Policy and Accountability

I Implemented a Federal Government Executive Order on Technology in Mexico. Here’s What I Learned

And how it applies to the Biden administration’s AI executive order

November 15, 2023

While the US and other governments race to regulate AI, it is important to ensure that their sense of urgency does not undercut the intensive and often slow-moving work that sustainable change requires. Since the Biden administration unveiled its AI executive order and companion guidelines drafted by the Office of Management and Budget, I have been reflecting on what I learned when I worked to implement a federal government executive decree on technology in Mexico, ten years ago.

I joined the government of Mexico in 2012 to help build a national digital agenda, largely focused on transforming the government’s use of tech to make better decisions, enhance transparency and citizen engagement, and improve public services. At that time, colleagues at the UK government digital service offered a practical philosophy that resonated with many of us in the open government movement: the strategy is delivery. In short, this means striking a skillful balance between policy design and implementation, and making the structures you have work for the benefit of people’s lives in tangible and specific ways. It was this “science of delivery” that I had to learn quickly, by understanding how to navigate the structures in place and use existing vehicles creatively to advance policy goals.

The Biden administration’s new executive order on AI is monumental, reflecting a forceful mandate from the executive branch to mitigate AI’s impacts on people’s rights and safety. Now the hard work begins, to build a robust public debate about what we want AI systems to do and where there is a need for guardrails to mitigate their harms. Translating policy into practice will happen inside agencies, and through the people involved in the governance structures and coalitions that are built around them. Reflecting on what I learned from my time in government — the things that worked, and what I would have done differently — here I share a few lessons in the hopes of helping people working in government and in the field more broadly.

Lasting change has a better chance when the moment of top-down authority is met with on-the-ground legitimacy — especially when political winds shift.

The AI executive order, coming from the highest level, offers a chance to rethink policymaking and shift decision-making practices to focus on getting things done to meet people’s needs. Nurturing trusted relationships with community groups and civil society organizations, and establishing meaningful participation processes with the people you serve, should be baked into every agency’s strategy. The administration’s proposed OMB guidelines mandate that agencies consult affected groups in the design, development, and use of AI. For more on what this can look like, read Data & Society’s policy brief Democratizing AI: Principles for Meaningful Public Participation, authored by Michele Gilman. As Gilman outlines, public participation helps avert harmful impacts, adds legitimacy to decisions, and improves accountability. When participation is carefully built into all phases of the AI lifecycle and effectively shapes outcomes, policies and systems are attuned to real needs articulated by the people most likely to be impacted by them — and therefore have more public support.

To implement a strategy, people need the right leadership and support.

The success of any strategy depends on the people driving its implementation. In Mexico, I was lucky to work with a team of bright and passionate public officials who were eager to make government work better, even as we often met challenging and adversarial situations. While we worked to implement a mandate to improve the governance and availability of government-held data, we focused on socializing the tangible benefits agencies would see if they committed to this kind of change, rather than relying on the logic of compliance. We offered training across all federal government agencies and saw the best outcomes in departments where we seconded multidisciplinary units to work in partnership with agencies that had political acumen but lacked technical skills. This presented a ripe opportunity to further cement a community of support and knowledge-sharing across departments that otherwise worked in silos. As the US government seeks to create enduring practices to manage risks from rights- and safety-impacting uses of AI, such an approach will be critical — from the appointment of each agency's chief AI officer to getting civil servants to buy into the benefits of the new challenge at hand.

Having core competencies and timely access to infrastructure within government can help mitigate the risk of corporate capture.

“Human-in-the-loop,” in the context of AI and other machine learning systems, refers to the practice of having people verify and correct the outputs of technological systems. It is similarly critical in government, where having a human in the loop should translate to hiring and empowering people who have the capacities and duty of care to make these tools work for their intended purpose. Outsourcing this work effectively keeps core competencies and capacity out of the loop, and exposes the government to the risk of vendor lock-in. This is particularly important when a policy encourages the adoption of new technologies in the name of innovation. For this to become part of a broader shift toward public interest technology, governments will have to decide how to uphold fundamental principles, like those outlined in the Biden administration’s Blueprint for an AI Bill of Rights, while driving innovation — ensuring that the federal workforce has the core skills and control of infrastructure needed to align with that strategy.

Tech skills alone are not enough: The government must create interdisciplinary spaces where people from technical, social, and political backgrounds are able to ask probing questions and seek the perspectives of people with diverse lived experiences.

Many have argued that what the government lacks is tech talent — but on their own, tech skills are insufficient. I saw this clearly when my team brought together a task force of sector-specific experts who were working with technologists to pilot programs that would help agencies achieve commitments in the UN’s Sustainable Development Goals, like reducing the maternal mortality rate. In partnership with UNICEF and independent research institutes, we rolled out an open source framework designed to enable pregnant women to send and receive vital information and health advice through basic mobile phones. Qualitative feedback from women and doctors — and importantly, having the ability to assess impacts based on empirical evidence and adapt the software — enabled responsive and timely improvements to the program even while operating in a dynamic context. The AI executive order mandates rigorous pre-deployment testing for rights- and safety-impacting purposes; testing AI systems in a real-world context and conducting ongoing monitoring can only be truly successful in these kinds of interdisciplinary environments.

While government officials should rightly focus on implementation and delivery, we all have a role to play in building lasting infrastructure for long-term sustainability (or what is called policy “irreversibility”). Beyond government, we need a strong and ongoing push from civil society to ensure that the government is held accountable for its commitments and to continue to urge further legislative action. With the new executive order and its companion OMB memo, civil society organizations should seize the opportunity to voice concerns, submit regulatory comments (including to the OMB), and collaborate with agencies to shape governance standards. The real weight of the executive order will be seen in the strategy of delivery that succeeds it, and in the lasting infrastructure we should all collectively contribute to building.