Frequently Asked Questions About AI at UW

In November 2025, I took on the role of Vice Provost for Artificial Intelligence at the University of Washington. At that time, the Provost’s communications team conducted an interview with me in three parts (1, 2, and 3). Since then, more questions have come up. I’ve attempted to address some of them here, focusing mostly (but not exclusively) on teaching and learning. The content here is a frank discussion of my own thinking, in the spirit of sharing openly with my colleagues. None of what is on this page should be taken as an official UW position or policy.

What do you mean by “AI”?

“AI” commonly refers to a rapidly evolving set of computational methods and socio-technical practices that enable machines to perform tasks associated with human intelligence, while raising interdisciplinary questions about design, use, impact, and governance. In the context of this FAQ (written in February 2026), AI is often exemplified by multimodal language models augmented by access to external tools like web search engines and other software, and that’s mostly what’s under discussion here. I anticipate that new developments will continue to shape the use of the term.

While “artificial intelligence” is a term that has been widely adopted in marketing and popular discourse, it’s important to recognize that its received meaning has been shifting ever since it was introduced in the mid-20th century. That’s not a flaw unique to this phrase. It’s how language works in any human society. As technologies, institutions, and cultural anxieties change, communities naturally stretch old words to cover new realities, argue about boundaries, and eventually settle (temporarily) on new defaults. The term “AI” is going through that ordinary process of semantic drift, and the most productive response is to be explicit about what we mean in a given context, rather than discarding the term in frustration or insisting that it must have one fixed, timeless definition.

Why is UW doing anything here? Why do we need a VP for AI?

UW is taking these steps because AI tools are rapidly becoming part of how knowledge work gets done in many fields our students will enter, and in many settings students already encounter today. The UW’s role isn’t to mandate adoption or chase hype; it’s to build grounded, practical AI literacy so students, faculty, and staff can talk about these systems clearly, evaluate claims about them with evidence, and make thoughtful choices about when to use them, how to verify outputs, and when to opt out. Doing nothing would leave our community to navigate a high-impact technology by rumor, fear, and uneven access. That’s exactly the opposite of what a leading research university should stand for.

Are you forcing “AI” (usually used to refer to language models) into UW classrooms?

No. It’s about informed choice, not institutional requirements. We are not forcing language models or other technologies into anyone’s teaching. Faculty autonomy in pedagogy is sacrosanct. What we are doing is offering support so instructors can make informed choices that fit their learning goals, and can clearly explain the reasoning behind those choices to students. If we protect faculty autonomy and faculty make different choices, then it’s fair for students to ask that we, as instructors, explain how we reach our conclusions about the use of language models in our classrooms.

What should students be able to do?

Students should develop discernment: knowing when it makes sense to rely on AI, when and how to verify its outputs, and when to opt out. They should be able to reason about what modern AI systems can and can’t do, evaluate claims about them with evidence, and use (or refuse) these tools thoughtfully rather than defaulting to hype, fear, social pressure, or convenience.

Some students have negative sentiment toward language models and do not want AI to intrude in education. Are you ignoring that?

Not at all. Student concerns about privacy, labor and creativity, environmental impact, and about truth, evidence, and the reliability of knowledge are real and worth engaging. The goal is not to “sell” LMs or require their use; it’s to empower faculty and students to navigate a changed landscape thoughtfully. That includes helping instructors communicate clear expectations and reasoning to students, whatever they decide. It also includes enabling everyone to learn what approaches are effective for learning, and encouraging creative thinking that will shape what “AI” looks like in the future (rather than just reacting to the current state of the technology).

As educators, shouldn’t we prioritize and protect students who reject AI on principled grounds?

We should take those positions seriously and create learning environments where students can succeed, without coercing them to act against their principles. At the same time, students will benefit from understanding where these systems are used, what risks they pose, and how to engage responsibly with workplaces and institutions that may adopt them. The tools we call AI are likely to show up in our students’ careers and lives, so we should be honest that avoiding them entirely could come with tradeoffs. We owe students something more than a simple “opt out”: we should help them develop the capacity to reason well about the tradeoffs, understand what these systems are (and aren’t), and participate in constructive debate about responsible use. Supporting students doesn’t mean avoiding hard conversations; it means having them respectfully, with evidence and nuance.

I’d add, too, that we can’t ignore students who believe that being prepared for life after university will require learning to work with AI. In computer science, it’s becoming clear that those who can use AI effectively to support complex work, while still verifying and taking responsibility for the results, may sometimes have advantages over those who cannot. I hear from colleagues with expertise across many fields that variations of this pattern are emerging in tasks from research and data analysis to drafting and revision.

What about misuse?

Misuse happens, and it can undermine learning. But “some students will misuse tools” is not new; what’s new is the scale and accessibility of the tools. That’s exactly why we’re focusing on informed instructional choices and best practices: what kinds of assignments are resilient, what guidance helps students learn, and what approaches are effective (or ineffective) if an instructor chooses to incorporate AI in some way.

Because there isn’t a single best approach, our aim is to connect instructors with evidence-informed options and colleagues who have thought deeply about them. The AI@UW launch event brings one such expert, José Antonio Bowen, to campus and convenes faculty workshops and peer-led sessions, helping faculty learn about and choose approaches that fit their course goals.

There is concern about environmental harm from AI and data centers. Is opting out the most ethical response?

Environmental impact is a serious concern, and I share it. In my research, I’ve worked on “Green AI” questions and believe we need better measurement, transparency, and mitigation. I’m proud that our Olmo effort is highly transparent in reporting the impacts of building language models (see section 6.5 in the Olmo 2 paper). The landscape is also evolving quickly, both in how impacts are measured and how systems are built and operated.

Opting out is one response, but there are others. It’s worth looking at the issue more broadly: data centers support many modern services (streaming video is a straightforward example), and ethical responses can include demanding accountability and disclosure, working toward efficiency, and making responsible procurement choices, not only individual abstention. When I discuss these matters with students and others with environmental concerns, I seek to respect the underlying values while also challenging everyone to think rigorously about which actions actually reduce harm.