Science and technology policy fellowships train scientists and engineers to use their expertise to advise government officials on technical matters that inform policymaking. Across the US, differences in legislative structure between states (e.g., legislature size, session duration, state resources) require state-specific fellowship design. This report describes two case studies of emerging fellowships in North Carolina and Virginia and uses these examples as a model to suggest how other states might implement similar policy fellowships. It highlights the government structure of each state, focusing on how each type of legislature informs the most promising options for host locations, funding sources, and fellows' duties. For coalitions to establish successful state science policy fellowships, the report recommends understanding the particular structure and needs of the state government, communicating with key stakeholders, and identifying additional opportunities for fellows to engage outside of state government.
While Artificial Intelligence (AI) is often viewed as a neutral technological tool that brings efficiency, objectivity, and accuracy, choices about design, implementation, and use can embed existing racial inequalities into AI and create racially biased systems that produce harmful consequences. This problem is especially consequential in the criminal justice system, where the federal government increasingly uses AI to replace or support judicial decision-making. This report addresses the primary causes of the development, deployment, and use of racially biased AI systems and suggests responses to ensure that federal agencies realize the benefits of AI while protecting against racially disparate impacts. It recommends three actions for agencies to prevent racial bias: 1) increase racial diversity among AI designers, 2) implement AI impact assessments, and 3) establish procedures for staff to contest automated decisions.
The rapid development of Artificial Intelligence (AI) presents a serious challenge to the American copyright system and to future advancement in the AI industry. Both depend on intellectual property protections to maintain an equilibrium between productivity, remuneration, and competitiveness. American policymakers, however, have paid little attention to the intersection of artificial intelligence and copyright protection. This study collects data from AI scientists, tech policy experts, and copyright scholars showing that, while intelligent software is an important contributor to American cultural development, half of respondents believe the US Copyright Office is not prepared to deal with an influx of computer-generated works.
This report explores five privacy provisions to mitigate the harms associated with exploiting smart-city data. The policy proposals include differentiating personally identifiable smart-city data from de-identified data, creating a warrant requirement for personally identifiable smart-city data, limiting the sharing of personally identifiable information collected by smart-city sensors, adopting data minimization requirements, and introducing private and public enforcement mechanisms. Taken together, these provisions can lay the foundation for a robust, privacy-protective response to the threats posed by unregulated access to smart-city data. To prevent the emergence of surveillance cities, the report urges state and local governments to implement these fundamental privacy provisions in their jurisdictions.
Face recognition is the automated process of comparing two images of faces to determine whether they depict the same individual. Across the country, law enforcement uses face identification to identify people who refuse to identify themselves, compare mugshots against existing entries in a face recognition database, find suspects, and conduct real-time video surveillance. This report outlines the risks associated with law enforcement's use of face recognition technology. In particular, law enforcement face recognition is unregulated by state law; major face recognition systems are not audited for misuse; most law enforcement agencies do little to ensure that their systems are accurate; and police face recognition disproportionately impacts Black Americans. The report also provides recommendations that Congress, state legislatures, and federal, state, and local law enforcement agencies can adopt to mitigate the harmful effects of face recognition.
Body cameras are rapidly becoming the norm in communities across the country. Campaign Zero reviewed available police department body camera policies from the 30 largest cities in America to determine whether this new technology is being implemented in ways that ensure accountability and fairness while protecting communities from surveillance. The review examined factors such as camera usage and coverage, fairness, transparency, privacy, and officer accountability in these major cities.
Smart borders involve the expanded use of surveillance and monitoring technologies, including cameras, drones, biometrics, and motion sensors, to stop unwanted migration and track migrants. This report details some of the more prominent deployments of smart-border technologies. Beyond drones and automatic license plate readers, such technologies include integrated fixed towers (IFTs), ankle monitors, and migrant data analysis and tracking. The report also highlights core harms of US border policing, including a boom in the surveillance-industrial complex, the separation and undermining of families and communities, the maiming and killing of large numbers of border crossers, the exacerbation of socioeconomic inequality, and more. Lastly, it highlights key demands of various migrant rights and advocacy groups, which collectively would begin to dismantle the border and immigrant control regime.
As a result of the COVID-19 pandemic, courts at all levels, from local trial courts to the Supreme Court of the United States, have conducted legal proceedings using remote teleconferencing technology. This report outlines how remote hearings and trials can significantly harm both perceived and actual fairness, as well as individual privacy rights. In particular, it discusses how courts must account for the digital divide, security vulnerabilities, potential fraud, and the risk of manipulated audio or video in evaluating online courts. The report also lists best practices courts should follow in order to prevent these dangers.
Law enforcement and immigration agencies' use of facial recognition technology presents many harmful consequences: it permits invasive tracking and targeting that threatens individual privacy, chills First Amendment activities (e.g., political protests and religious activities), relies on faceprints obtained without consent, and exacerbates the disproportionate harms faced by Black and Brown communities already subject to over-policing. This report calls for Congress to enact the Facial Recognition and Biometric Technology Moratorium Act, which prohibits federal government acquisition and use of biometric technologies, including facial recognition, except where Congress has enacted a specific and robust set of safeguards. Until Congress implements guardrails to address the harms that facial recognition can cause, federal law enforcement and immigration agencies should cease all use of the technology.