
People to Watch 2025 – Alondra Nelson

Alondra Nelson

Harold F. Linder Professor, The Institute for Advanced Study

First, congratulations on your selection as a 2025 BigDATAwire Person to Watch! From 2021 to 2023, you were the deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy (OSTP). What was your greatest achievement in that role?

Overall, I am proud that the Biden administration took a distinctive approach to science and technology policy that centered on its benefits to all of the American public: their economic and educational opportunities, their health and safety, and their aspirations for their families and communities. President Biden’s guidance shaped our approach to climate and energy policy, development of the STEM ecosystem, expansion of healthcare access, and advancement of emerging technologies such as quantum computing, biotechnology, and AI.

When President Biden took office, artificial intelligence was becoming increasingly prominent in public discourse. There was growing excitement about AI’s potential to transform healthcare, improve climate modeling and accelerate clean energy innovation, and increase accessibility to government services. OSTP was working to establish the National AI Office and coordinate government use of these powerful technologies. However, we recognized that we must not confuse what AI can do with whom AI should serve—the fundamental purpose of this technology must be to benefit the public. Simultaneously, public concern was rising due to incidents where AI systems caused harm: parents wrongly arrested based on faulty facial recognition technology, people receiving unequal medical care due to flawed insurance algorithms, and homeseekers and jobseekers denied housing and employment opportunities because of discriminatory AI systems.

This was the context that led to the development of the Blueprint for an AI Bill of Rights, the first statement of the Biden administration’s AI strategy, which balanced research and innovation with the people’s opportunities and rights. In developing the AI Bill of Rights, we led a year-long public input process engaging technology experts, industry leaders, and even high school student advocates to develop this framework. It represented the first articulation by the U.S. government of how artificial intelligence should be developed and governed to safely serve and empower humanity, improve people’s lives, and address potential harms. The AI Bill of Rights formed the rights-based foundation for President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence and shaped a distinctively American approach to AI governance—one that embraces AI research and infrastructure development while establishing crucial guardrails to protect consumer safety and build public trust in these systems.

 

You have been instrumental in shaping the discussion around AI ethics and privacy. How do you see that discussion shaping up in 2025, as enterprises begin to take their GenAI proofs of concept into full-blown production? Do you think industry has adequately addressed the concerns around AI ethics?

No, I don’t believe most of industry has adequately addressed the need for AI guardrails or fully embraced the practices needed to make this happen. Some companies did develop thoughtful governance frameworks during the previous administration’s push for responsible AI, such as the voluntary commitments leading technology companies made to ensure that products are safe before they are released and to build AI systems that prioritize security and privacy. But we’re now seeing a regulatory pendulum swing that has reduced pressure on enterprises to implement robust safeguards.

Vice President Vance asserted in Paris at the recent AI Action Summit that concerns about AI safety are mere “handwringing” that could somehow limit American companies’ ability to innovate and dominate the market. This is a fallacy. There’s no tradeoff needed between safety and progress, or between rights and innovation. The history of our innovation economy shows us that guardrails, standards, and societal expectations drive developers to create better products that are more useful and less harmful. Consider how aviation regulations brought us safer jet travel. Now, contrast this with AI being used in air traffic control, which the Trump administration is discussing implementing within weeks, with few details available for public scrutiny. That is especially concerning given that generative AI currently produces inaccurate responses and fabricated content.

We’re seeing some companies retreat from their previous commitments as the AI priorities of the second Trump administration emerge. For example, several organizations that had established pre-deployment review processes have scaled back these initiatives. Without new legislation from Congress, we now observe major tech companies calling for looser standards, echoing messages from the White House.

However, some enterprises say they continue to prioritize safety, rights, and public trust despite this political shift. Many recognize that building responsible AI isn’t just about compliance—it’s about adoption, creating products that consumers and business partners will trust. While regulatory requirements fluctuate, public expectations for AI that minimizes harm continue to grow.

 

GenAI was released to the public in 2022, and Geoffrey Hinton and others warned in 2024 that it could destroy humanity. But few people are sounding that alarm these days. Has the danger of AI passed?

Concerns about the risks and harms of AI preceded the commercial release of ChatGPT and only intensified afterward, including with the March 2023 open “pause” letter. The danger has not passed at all. There are, fundamentally, two kinds of dangers: those we can imagine in the future, and those that exist now and are impacting people’s daily lives. We already know quite a lot about the second kind: algorithmic biases are unfairly denying people mortgages; deepfake images are being used to harass and terrorize young people online; and AI systems are providing incorrect information that leads to consequences ranging from voters going to the wrong polling locations to patients receiving improper medical advice.

The first kind of danger — an artificial general intelligence turning against humans — I put that in the category of an industry talking point. They want us to believe that the technology is smarter than we are so we are confused about how to rein it in. Many of the people promoting that view stand to make very substantial profits from unrestricted development. They could also make very substantial profits from thoughtful and safe AI development, because more people would want to use their product.

 

How do you balance the risks of AI with the opportunity?

I had the opportunity to address that question in a private working session of world leaders, hosted by President Macron in Paris in February. While I spoke about the threats of artificial intelligence, about the ways this technology can perpetuate discrimination, threaten security, and disrupt social cohesion across continents, I also closed with a word of hope:

The printing press didn’t just print books – it democratized knowledge. The telephone didn’t just transmit voice – it connected families across great distances. The internet did more than link computers – it created unprecedented opportunities for collaboration and creativity.

We have the tools to guide AI to work for all of our people.

… If we advance thoughtful governance, we can ensure AI systems enhance rather than diminish human rights and dignity.

We can create systems that expand opportunity rather than concentrate power. We can build technology that strengthens democracy rather than undermines it.

 

What can you tell us about yourself outside of the professional sphere – unique hobbies, favorite places, etc.? Is there anything about you that your colleagues might be surprised to learn?

Outside my professional sphere, I’m an avid science fiction enthusiast. I love both reading classic and contemporary sci-fi novels and watching thought-provoking science fiction films and series. These narratives often explore the very technological and ethical questions I grapple with in my work, but in ways that stretch the imagination and challenge our assumptions.

I also find tremendous value in long walks, whether navigating city streets or hiking through nature. These walks provide essential thinking time and perspective that help balance the intensity of policy work and academic research.

When I can, I prioritize travel with my family.
