Frequently Asked Questions about AI at UW
(Last update: February 6, 2026)
In November 2025, I took on the role of Vice Provost for Artificial Intelligence at the University of Washington. At that time, the Provost’s communications team conducted an interview with me in three parts (1, 2, and 3), focusing on my views of AI and its role. Since then, many more questions have come up, and I’ve attempted to address some of them here. What follows is a frank discussion of my own thinking on these topics, in the spirit of sharing openly with my colleagues. None of what is on this page should be taken as an official UW position or policy.
What do you mean by “AI”?
“Artificial” means done by computers; “intelligence” is a trickier concept, and there isn’t a clear, agreed-upon definition among scientists.
One way to think about intelligence is through tasks. People perform a variety of tasks requiring some level of intelligence, ranging from the simple (such as making a dinner reservation) to the complex (such as researching and writing a book). These can often be broken down into smaller tasks in which we perceive information visually, understand or speak language, make a plan, or draw inferences to predict the future. AI research has focused on getting computers to carry out many such tasks, from small to large, simple to complex. In the past few years, there’s been a move toward combining these abilities in a single system. “AI” is also sometimes used to refer to specific systems that have been built to have such abilities.
That way of thinking might lead someone to see everything computers do as AI, but that is not the case. Though there’s not a hard boundary everyone agrees on, there are two key features that help us distinguish between “AI tasks” and other computational tasks.
First, AI tasks are often considered “intelligent” because we can imagine humans doing them well; non-AI tasks are usually ones that (most) humans are terrible at. Examples of the latter include storing and retrieving massive amounts of data, performing lots of mathematical operations quickly and accurately, playing audio and video files, and many other things people have historically used computers for because we could never have done them ourselves.
Second, AI systems are often distinguished from non-AI systems by how difficult it is to mathematically formalize the problem and/or the correctness of a solution. It’s easy for a human to describe and carry out a task like “find the cats in this image,” and to check the correctness of an AI system that attempts the same task. But it’s exceedingly hard to write down, in a formal, mathematical way, an algorithm for finding cats in pictures. These examples give some sense of what experts see as separating AI from other computing abilities, but they’re not the whole story.
“AI” commonly refers to the design, implementation, and evaluation of computer systems that perform tasks we think of as requiring intelligence. These activities also raise interdisciplinary questions about design, use, impact, and governance. In the context of this FAQ (written in February 2026), AI is often exemplified by language models, often “large,” often with multimodal abilities (i.e., input and/or output could be images or sound), often augmented by access to web search engines and other software. Language models are mostly what’s under discussion here, but I anticipate that new developments will continue to shape the use of the term.
While “artificial intelligence” is a term that has been widely adopted in marketing and popular discourse, it’s important to recognize that its received meaning has been shifting ever since it was introduced in the mid-20th century. That’s not a flaw unique to this phrase. It’s how language works in any human society. As technologies, institutions, and cultural anxieties change, communities naturally stretch old words to cover new realities, argue about boundaries, and eventually settle (temporarily) on new defaults. The term “AI” is going through that ordinary process of semantic drift, and the most productive response is to be explicit about what we mean in a given context, rather than discarding the term in frustration or insisting that it must have one fixed, timeless definition.
Why is UW doing anything here? Why do we need a VP for AI?
UW is taking these steps because AI tools are rapidly becoming part of how knowledge work gets done in many fields our students will enter, and part of many tools and services they’re already using today. The UW’s role isn’t to mandate adoption. It’s to build grounded, practical AI literacy so students, faculty, and staff can talk about these systems clearly, evaluate claims about them with evidence, and make thoughtful choices about when to use them, how to verify outputs, and when to opt out. Doing nothing would leave our community to navigate a high-impact technology by rumor, fear, and uneven access. That’s exactly the opposite of what a leading research university should stand for.
Are you forcing “AI” (which here usually means language models) into UW classrooms?
No. It’s about informed choice, not institutional requirements. We are not forcing language models or other technologies into anyone’s teaching. Faculty autonomy in pedagogy is sacrosanct. What we are doing is offering support so instructors can make informed choices that fit their learning goals, and can clearly explain the reasoning behind those choices to students. If we protect faculty autonomy, and faculty make different choices, then it’s fair for students to ask each of us to explain how we reached our conclusions about how language models should be used in our classrooms.
What should students be able to do?
Students should develop discernment: knowing when it makes sense to rely on AI, when and how to verify its outputs, and when to opt out. They should be able to reason about what modern AI systems can and can’t do, evaluate claims about them with evidence, and use (or refuse) these tools thoughtfully rather than defaulting to hype, fear, social pressure, or convenience.
Some students have negative sentiment toward language models and do not want AI to intrude in education. Are you ignoring that?
Not at all. We must engage student (and faculty and staff) concerns about privacy, labor and creativity, and environmental impact, as well as concerns about truth, evidence, and the reliability of knowledge. The goal is not to “sell” language models (or other AI systems) or require their use; it’s to empower faculty and students to navigate a changed landscape thoughtfully. That includes helping instructors communicate clear expectations and reasoning to students, whatever they decide. It also includes enabling everyone to learn what approaches are effective for learning, and encouraging creative thinking that will shape what “AI” looks like in the future (rather than just reacting to the current state of the technology).
As educators, shouldn’t we protect students who reject AI on principled grounds?
We should take those positions seriously and create learning environments where students can succeed, without coercing them to act against their principles. At the same time, students will benefit from understanding where these systems are used, what benefits they offer, what risks they pose, and how to engage responsibly with workplaces and institutions that may adopt them. The tools we call AI are likely to show up in our students’ careers and lives, so we should be honest that avoiding them entirely could come with tradeoffs. We owe students something more than a simple “opt out”: we should help them develop the capacity to reason well about the tradeoffs, understand what these systems are (and aren’t), and participate in constructive debate about responsible use. Supporting students doesn’t mean avoiding hard conversations; it means having them respectfully, with evidence and nuance.
I’d add, too, that we can’t ignore students who believe that being prepared for life after university will require learning to work with AI. In computer science, it’s becoming clear that those who can use AI effectively to support complex work, while still verifying and taking responsibility for the results, may sometimes have advantages over those who cannot. I hear from colleagues with many kinds of expertise that variations of this pattern are emerging in many fields, from research and data analysis to drafting and revision.
What about misuse?
Misuse happens, and it can undermine learning. But “some students will misuse tools” is not new; what’s new is the scale and accessibility of the tools. That’s exactly why we’re focusing on informed instructional choices and best practices: what kinds of assignments are resilient, what guidance helps students learn, and what approaches are effective (or ineffective) if an instructor chooses to incorporate AI in some way.
Because there isn’t a single best approach, our aim is to connect instructors with evidence-informed options and colleagues who have thought deeply about them. The AI@UW launch event brings one such expert, José Antonio Bowen, to campus and convenes faculty workshops and peer-led sessions, helping faculty learn about and choose approaches that fit their course goals.
There is concern about environmental harm from AI and data centers. Is opting out the most ethical response?
Environmental impact is a serious concern, and I share it. In my research, I’ve worked on “Green AI” questions and believe we need better measurement, transparency, and mitigation. I’m proud that our Olmo effort is highly transparent in reporting the impacts of building language models (see section 6.5 in the Olmo 2 paper). The landscape is also evolving quickly, both in how impacts are measured and how systems are built and operated.
Opting out is one response, but there are others. Everyone ought to look at the issue more broadly: data centers support many modern services (streaming video is a straightforward example), and ethical responses can include requiring accountability and disclosure, working toward efficiency, and making responsible procurement choices, not only individual abstention. When I discuss these matters with students and others with environmental concerns, I seek to respect the underlying values while also challenging everyone to think rigorously about which actions actually reduce harm.
Will relying on AI weaken students’ (and faculty’s) skills and make it harder to judge the quality of AI-assisted work?
This is a real concern, and the answer depends less on the existence of the tools than on how we use them. Any powerful tool can become a crutch if it replaces practice on core skills; it can also be a scaffold that helps people learn faster and attempt more ambitious work. We’ve seen this pattern before with calculators, search engines, and spellcheck: they changed what we practice, what we assess, and what we treat as baseline literacy. AI may amplify these effects because it can generate fluent work products; how different it is in practice from those other examples will be debated for some time.
I think the right educational response is neither full embrace nor full prohibition. It’s to be explicit about learning goals and to design activities that keep students doing the hard parts that matter: formulating questions, making and defending claims, showing intermediate reasoning, checking sources, testing code, and reflecting on uncertainty and limitations. In many contexts, using AI responsibly should increase the emphasis on judgment, because the user remains accountable for verifying outputs and deciding what to trust. If we do this well, discernment becomes a learnable skill and students remain responsible for the thinking.
What about copyright and the use of creative works to train some AI models?
This concern applies most directly to many generative AI systems, especially language and image models trained on large datasets, not to every use of “AI.” It is an important and contested area, with ongoing legal and policy debates. It’s reasonable for faculty and students to worry about whether creators’ rights and livelihoods are being respected, and it’s also reasonable to want clarity about what is ethical even when the matter may be legally unresolved.
From a UW teaching-and-learning perspective, my practical stance is: (1) we should acknowledge uncertainty; (2) when we have choices, we should prefer tools and datasets with clearer permission, licensing, and provenance; and (3) we should model good scholarly norms: attribution, respect for intellectual property, and transparency about how tools were used. For classroom practice, that often means avoiding requirements that push students into opaque, hard-to-audit platforms; not uploading copyrighted materials without permission or sensitive materials without privacy guarantees; and being clear about what kinds of AI assistance are permitted, what must be credited, and what students remain responsible for producing themselves.
I don’t just support efforts toward better disclosure and accountability in AI model development; I lead a team whose goal is unusually strong openness and documentation in that work (more about those efforts here).
———
Thanks for reading; I expect I’ll continue adding to this document as I talk to more members of the UW community.
