Bossware refers to systems that closely monitor and actively manage workers’ activities and performance. In particular, modern worker-monitoring technologies (e.g., sensors and cameras that track workers’ physical movements, or handheld scanners that log each item a warehouse worker scans) allow companies to observe workers’ movements and pace of work in unprecedented detail. Companies are using bossware to discourage and penalize lawful, health-enhancing employee conduct (such as taking rest breaks) and to enforce a faster work pace with reduced downtime. These tactics increase the risk of psychological harm and mental health problems for workers. This report outlines current legal protections for workers and provides recommendations for policymakers and enforcement agencies to mitigate the harmful effects of bossware.
During the COVID-19 pandemic, many school districts have introduced student activity monitoring software and other digital tools aimed at facilitating remote classroom management and driving student engagement. Though helpful, these tools can also be intrusive. This report examines whether students who receive school-issued devices are subject to more monitoring than peers who use their own devices. Findings show that students using school-issued devices are monitored more extensively than peers using personal devices; that students in local education agencies (LEAs) with wealthier populations are more likely to have access to personal devices, which are subject to less monitoring; that some LEAs are working to hold device and student activity monitoring software vendors accountable; and more.
Employers use algorithm-driven hiring tools as a fast and efficient way to process job applications in large numbers. Job-seekers are increasingly asked to record videos (that employers mine for facial and vocal cues), complete online tests (that are used to evaluate their optimism and attention span), and submit online resumes (that employers may use to reject job-seekers due to gaps in work history). Unfortunately, many algorithm-driven hiring tools run afoul of the Americans with Disabilities Act’s (ADA) prohibition on hiring processes that discriminate on the basis of disability. This paper highlights how hiring tools may affect people with disabilities, the legal liability employers may face for using such tools, and concrete steps employers and vendors can take to mitigate the risks of disability discrimination posed by algorithm-driven hiring tools.
Misinformation and disinformation that suppress voter participation can be deployed through a variety of media. Online voter suppression can range from inaccurate information about the date of an election, to inaccurate reports of long lines, to efforts to persuade people that an election is “rigged” and their vote won’t matter. However, it can be extremely difficult to discern the intent behind a social media post, and malicious intent is not necessary for a post to have a voter-suppressive effect. This short guide focuses on how to spot content on social media that can suppress voter participation.
Law enforcement and immigration agencies’ use of facial recognition technology has many harmful consequences: it permits invasive tracking and targeting that threatens individual privacy, chills First Amendment-protected activities (e.g., political protests and religious gatherings), relies on faceprints obtained without consent, and exacerbates the disproportionate harms faced by Black and Brown communities that are already subject to over-policing. This report calls on Congress to enact the Facial Recognition and Biometric Technology Moratorium Act, which prohibits federal government acquisition and use of biometric technologies, including facial recognition, except where Congress has enacted a specific and robust set of safeguards. Until Congress implements guardrails to address the harms that facial recognition can cause, federal law enforcement and immigration agencies should cease all use of the technology.
Many disinformation campaigns are built around racist and misogynistic content, suggesting that disinformation serves as a tool to promote ideologies such as white supremacy. This report outlines findings from various studies regarding online disinformation, race, and gender. Results show that racially targeted disinformation campaigns aimed at suppressing votes from communities of color appeared in the last three major U.S. elections; that disinformation tactics include the use of “digital blackface/brownface”; that Spanish-speaking communities are particularly vulnerable to disinformation; and more. Building on this research, the report identifies unresolved research questions around the intersections of online disinformation, race, and gender, and provides recommendations for tackling the methodological and technical problems that researchers and others face in addressing these topics.
The rise of shared mobility services, including rideshare and micro-mobility options such as bicycles and electric scooters, poses new challenges for cities and states. They must manage new kinds of traffic, keep streets and sidewalks safe and accessible, and ensure that transportation services are equitably distributed. To inform these decisions, local governments are compelling service providers to disclose records of where their users travel, when, and by what routes. To safeguard their constituents’ privacy, policymakers should minimize the type and quantity of compelled data disclosure. In particular, this report argues that localities should collect only aggregated data reflecting shared mobility service usage rather than compelling individual trip-level data. For localities that do not heed this advice, the report also offers recommendations to uphold consumers’ privacy interests and mitigate the legal risks to state and local governments.
Some schools and school districts are turning to social media monitoring as a response to the threat of mass shootings. Companies are marketing services claiming to identify sexual content and drug and alcohol use; prevent mass violence, self-harm, and bullying; and flag students who may be struggling with academic or mental health issues and need help. However, this report argues that social media monitoring software for schools is experimental, has limited efficacy, and is not a reliable method to predict mass violence. Rather, such monitoring techniques invade students’ privacy, discourage students from engaging in activities that are critical for their development, and disproportionately burden minority, underserved, or vulnerable students.