
Ep12. Design for AI with Dan Saffer


Dan Saffer is a UX design leader, author and assistant professor at CMU Human-Computer Interaction Institute.

Dan’s work has directly influenced my career development; as such, this conversation will also be extremely valuable to budding and leading user researchers, designers and product managers.

We discuss everything AI: the industry, the good and not-so-great examples, principles that UXers and product folk should keep in mind and much more.

Listen to the episode

Also available on all major podcast platforms and YouTube.


The following notes are automated with AI; thus, they may contain inaccuracies.

V (00:00.598)
We are jumping in at the deep end. But what I usually do is ask people to… I’m well aware of you, I guess, but new people in the industry might not be. Even with the people I coach or manage or talk to, I often use one of your frameworks, the UX disciplines one, from, I think, 16 years ago or so, but still.

Dan Saffer (00:24.094)
Like, oh. Yeah.

V (00:29.458)
I apply it so often, and I think people benefit from understanding, okay, this is the complexity. But on that note, would you mind giving a bit of the story of how you even figured out that design was the thing you wanted to pursue in the beginning? That’s the kind of thing people tend to overlook; they jump straight into the current stuff. But I feel like the origin story is always intriguing.

Dan Saffer (00:57.626)
Yeah, well, it’s funny. My old design professor Dick Buchanan used to say that design is like California, that no one is born there. And I really feel that for myself, like that I kind of stumbled into it because I was

I was an English major and a theater major and kind of humanities background. And that’s kind of where I thought I was gonna go more into kind of the writing side of things. And I ended up working at a book publisher at the time when the web was taking off. And…

It was one of those things where we’re like, oh, well, do you have a browser? Oh yes, I have a browser. Well, how would you like to work on our website? Great. Okay. So I ended up kind of teaching myself design and some tech to set up the website. It was the era where there were no books written yet, so you looked at source code and taught yourself how to

build pages and set up a SQL database and all this other stuff. So I did that for a number of years. And then, after like three or four years doing that, I was like, well, maybe I should probably be trained in this. So that’s when I went back to grad school for two years. And then went to work at Adaptive Path right out of grad school. And then…

Started my own studio, worked at a number of really great studios like Smart Design, worked in robotics, wearables, social media, worked at Twitter for almost four years, a bunch of different places that were all really interesting. And now…

Dan Saffer (03:01.142)
I’ve been teaching design at Carnegie Mellon at the Human-Computer Interaction Institute for the last year. That’s been a big radical shift in my career, so now I’m back to doing things like writing and teaching again. I’ve sort of come full circle.

V (03:23.094)
Yeah, is it as difficult to teach, I guess, upcoming students as to teach stakeholders? Because that I…

Dan Saffer (03:35.383)
No, stakeholders are harder, I think, because stakeholders don’t necessarily want to learn how things work. They want to see results. They want to see the end product. They don’t want the process, which I get. But my students, for the most part, really want to learn how to do things and how to look at things through a design lens.

And so in that respect, it’s easier, but you have to go a lot deeper. For the most part, I’m training people to actually do things with their hands and with their brains. And so I have to teach them not just the overall concept, but also how to then apply it. That’s the real hard part.

The concepts are easy, but the actual application, making things with the concepts, is the real challenge.

V (04:41.63)
Yeah, and I guess you started doing it quite recently; I think it’s been a year or so from what I’ve seen online, right? So congrats on that. But I feel like you kind of went from the private sector, if I’m correct, from actual tech companies and the horrors there, because, you know, that’s the bread and butter for UX, that you…

You don’t realize how messy the actual making is until you get into it. But I wonder what’s your take? Are you trying to prepare the students for those realities? Because the constraints are ever increasing. And once you get to the actual industry, you realize, oh crap, there’s the people element and egos and things of that nature. I wonder, what’s your take?

Dan Saffer (05:38.182)
Yeah, I mean, one of the things I was really worried about before I started teaching was, am I going to have enough hope to really inspire students to want to do this job? And as it turns out, the students give me a lot of hope. So that’s a real positive. Just their energy, their ideas. But yes, I definitely feel like it’s my job not to

douse that flame or anything like that. I feel like I’m handing them tools and weapons that they can use when they go out into the trenches of the tech industry. To be like, here’s what’s really happening right now, and here are the things that you’re gonna encounter in your job, and…

here are ways that you can start to overcome that. And it’s not just the other people right now; it’s also the threat of AI taking our jobs. So I’m also preparing them for that world where it’s like, okay, you’re gonna have to work with AI, and some of these low-level jobs I think are just going to disappear. So how can you make yourself

AI-proof and more strategic? How can you get higher up the ladder almost as soon as you get to a place? Because within a couple of years, I just think some of the lower-level production jobs, the jobs where people hand you a JIRA ticket and you go and do the JIRA ticket, I think those jobs are going away. I think the jobs that we are traditionally better at,

the more strategic jobs, I think those are going to stick around. And so I think my job is to help prepare people to think more strategically, to help designers think more strategically, so that when they get to their workplace, they can start to contribute things that AI is not going to be able to, at least not yet.

V (07:58.562)
Yeah, no, I hear you as well. And I feel like this is probably shared across the industry. I feel some of that too, maybe less fearfully, because I make an active effort to consider the limitations, but also how I could automate myself out of my job. If you do that exercise, you realize, yes, there is a lot of change which ought to happen.

But then I hear so many people, let’s say at the senior level and above, who are, I guess, quite afraid, and it’s probably more so because of the tech shakeup, the messy layoffs, that drama. It’s almost hard to describe how terrible the last couple of years have been, especially for UX. But I wonder what’s your take on that?

I feel like AI, in my perspective at least, has almost been an add-on to the already very messy journey a lot of people are facing.

Dan Saffer (09:10.258)
Yeah, I agree. I mean, the layoffs that we’ve seen over the last two years have really nothing to do with AI at all. They’re more about… I mean, sometimes I can’t even tell what they’re about anymore. Is it over-hiring? Is it just that…

Is it just that everyone’s doing it and Wall Street is rewarding it, so therefore we should do it too, even though we have profits? So yeah, I think there’s just been this terrible economic anxiety that’s happened. And now there’s this other one that’s coming at us from another direction. I think people are concerned about it, but I’m not seeing a lot of

UX jobs being replaced by AI right now. I don’t think that’s happening yet, but I do think that is something we need to keep our eye on, because I think it will happen within the next five years, if not sooner.

V (10:25.294)
You think within the next five years? Why do you think that? That’s interesting, because I would challenge that. I mean, we’re probably going to automate specific tasks, specific bits, but I feel like people are going to adapt as we go, or hopefully will. That’s where my thinking is: the full automation of what we do would take, I would hope, like a decade. So what’s your take? Why do you say five years?

Dan Saffer (10:53.698)
I would have agreed with you like a year or so ago, but it feels like the pace of AI is very rapid, and there’s just a lot of money being pushed at it, a lot of resources being pushed at it. And I think there are things that are accelerating the amount of change that AI

is capable of, like being able to build things on these foundation models of AI so that you don’t have to create everything from scratch. Well, that’s a real accelerator. And I think some of the issues that prevented things from accelerating are going away. For example, chips,

actual hardware chips. More people are getting into that game and trying to have more processing power, and that alone is going to accelerate things. So that’s why I’m thinking within five years there will be something that can do some pretty good UI designs with a UI library, and

possibly pretty good flows and stuff too. So I’m thinking there will be things that can generate a set of UI examples and be like, which one of these is good? I think we’re five years out from that. I think it’s closer than we would like to admit.

V (12:38.972)
Interesting, yeah.

V (12:43.966)
Yeah, well, I’m actually falling back to your UX disciplines framework a lot in my thinking, at least, because it’s been foundational in my learning, you know. But I feel like UI design there, at least if you look at the topography, is just one area. To do UX properly, you would need to start automating all the different bubbles, all the different areas.

And those tasks have to overlap. And of course, it’s going to happen; I think it’s inevitable. Funnily enough, the other day in the community I run, this design squad, someone posted a job ad which was basically looking for people to join this mysterious firm that’s trying to log

the Figma activities of a designer as they design. So you’re being paid per hour to train some sort of model on design decisions. And that kind of blew my mind, because I was thinking it would be done more by proxy, where a person would, as an ethnographer, as a UX researcher, sit with you and capture your decisions and then try to train the model over time. But no, it seems like…

Dan Saffer (13:50.166)
Strange, right?

V (14:10.558)
there is a wave of people being hooked into this. And my hope is still that it’s a bit bigger than UI design. But yeah.

Dan Saffer (14:22.058)
Right, well, absolutely, I agree with you. If you’re thinking of user experience as UI design… and I think that is the crux of it. If you think of yourself as a UI designer, I think that level of production work

may go away or be diminished, or there’ll be a lot fewer of those jobs. But if you look at UX as a whole thing, then yeah, a lot of it’s not gonna be automated. There’s a lot of context that you need to understand to make a product. There’s a lot of common sense that you need. There’s a lot of collaboration you need to do to make a product.

And for designers, there’s a lot of taste and aesthetic decisions that need to be made to make a product. So I think those are things that are not going to be easily automated. And you’re right, there are tons of pieces of UX. I think strategy, I think information architecture, I think user research: the insights, the big jumps

that people will make. Those things are gonna be very difficult for AI to replicate. It’s more the things where AI can look at 20,000 different websites and say, okay, here’s the layout and navigation style of these. I’m gonna give you 10 versions of this.

Which one do you think fits your context better? And you say, well, okay, maybe that one does. So I think that’s where I’m worried. And a lot of those jobs… I mean, that’s kind of how you start in the industry, doing these lower-level UI problems, versus the harder UX

Dan Saffer (16:45.654)
and more strategic problems. So that’s why I’m trying to push all my students to think beyond UI and go deeper into UX, because I think that’s the stuff that’s gonna stick around in the next 10, 20 years.

V (17:01.886)
Yeah, and that has been my positioning too, and I might echo what you said in a way. But one of the call-outs, or one of the questions I recently got on one of my content pieces, has been that UX might merge into product management, or vice versa. Basically, there might be, I guess,

a fluid transition between the skills. And it has been happening, I’m sure you would agree: product managers now almost have an expectation, as one of their skill sets, to do UX. Or in the most lean environments, if you’ve had layoffs in tech, product managers tend to do user research as well, quite deep at that too. Some good product managers obviously can delegate. But there is also some of that.

Dan Saffer (17:41.461)
Oh, hi.

V (17:59.286)
But I wonder what’s your take on what the future UXer looks like? And is this something you’re also framing for your students? Because I’m sure they are basically years, or maybe semesters, away from entering the market.

Dan Saffer (18:22.25)
Right. A couple of things there. So I teach people that are in the undergrad design and HCI area. And then I teach master’s students in HCI who are mostly going to be designers. And then I also teach

students who come to us through the Master of Science in Product Management. So I do teach a lot of product managers as well. And they are learning design. They are learning some design tools. They are learning how to think like designers. Now, do I think that most of them will become designers or replace designers? I’m pretty dubious about that, because I do think that there are

aesthetic product decisions that many of them will struggle with. That’s not gonna be their strength, not their strong suit. That’s just not a hat that they’re used to wearing. Whereas I do think that a lot of designers, particularly senior designers,

probably yes, may start to merge into some kind of hybrid role. Some of our graduates even do go into product management; they become product managers. And the more the tools enable that, the more they will be able to do kind of both jobs, thinking about the product strategically and doing things like user research.

Trying to find the strategic fit for these products and having an aesthetic sense is, I think, a really powerful combination. And what people end up calling that role, whether it is a designer or a product manager… maybe we’ll see a DPM, a design product manager, the same way that we have engineering product managers or technical product managers.

Dan Saffer (20:43.71)
Maybe there’ll be something like that too. It’s hard to say. I tell my students, look, when I graduated college, the job that I ended up having didn’t exist yet. And the job that you’re gonna have in 20 years probably doesn’t exist right now either. So who knows what kind of hybrid it is? Maybe it’ll be some kind of AI

trainer or something like that. Maybe it’ll be an AI manager, something that we can’t even foresee yet. But, you know, if you take out the titles, it’s like, what’s the important thing for people to know? And it is that kind of product strategy,

understanding context, understanding the users, and being able to rally people around a vision. That stuff is what’s really important.

V (21:49.23)
And just a notion, I think this is almost like a sidebar, but some of my guests on the podcast have called out, quite negatively so, or maybe pessimistically so, that they would not get into UX today. Which, you know… I never want to discourage anyone, because I feel like you’re always going to need

this type of human centricity and design attention to a lot of different problems; it’s unavoidable. But I wonder what’s your take? How do you see the industry itself? Because we do have so many people who are seeking jobs in design, but there’s…

Dan Saffer (22:37.842)
Yeah, I mean, I had to really reconcile this as I was starting to teach people to become UX designers. Like, well, would I recommend that people get into this field right now? And I had to say, yeah, of course. But they will be fighting very different battles than

we fought 10 years ago or 20 years ago, or the people that came before me fought 30 years ago or 40 years ago. Their battles right now are all about: how do we keep humanity in the products we’re designing, and how do we keep adding value for users and not just extracting more and more value out of them,

as I think a lot of businesses kind of want to do right now. Sorry, I’m getting worked up. I mean, you see things like BMW trying to charge people to have heated seats, to have a subscription for heated seats. And it’s like, well, how much value can you really extract out of people

before people are like, enough, this is too much? Where it ruins the product, it ruins the experience of the product, and people go elsewhere. Or, in the worst-case scenario, they’re stuck using your product because they have no choice; it’s part of their job or whatever. So that’s the world that I’m trying to gear my students up for.

It’s like, how do you win at giving people value, giving people things that they value from the products and services that we’re making, versus just seeing people as human ATMs that you can extract more and more money and time and engagement out of? So that’s…

Dan Saffer (25:03.066)
It’s a tough world to be getting into. Maybe even more challenging than previous eras, when it was just like, can we make this work? Can we get design to be a respected profession? Because people are actually using our own tools

against us and against our users, right? Can we make this gorgeous, beautiful thing that’s very addictive and very seductive, and all it’s doing is sucking away your time and money? That’s a hard thing to fight. But I think we need people to fight it. We need people to go in there and be like,

Dan Saffer (25:57.278)
What are we doing here? Why is this like this? I mean, these giant tech companies are not going away. So the real question is: can we fight them from inside? Can we make sure that there are people inside who are able to keep human beings in mind, be advocates for them, and try to change things for the good when they can?

V (26:27.378)
Yeah, yeah. And that’s my feeling too, I think, just to add to it. I’m trying to almost step back and see what the industry is doing. And right now… I just looked up the job openings the other day, and every PM job right now is

Dan Saffer (26:29.668)
That’s my take on it.

V (26:56.43)
to do with AI. It clearly states “Senior Product Manager, AI”, and that’s just UK-specific. So it’s probably even more so in the States, I presume, especially Silicon Valley. So every PM is going to be oriented in a very tech-heavy, tech-centric mindset and ways to solve problems. My presumption, my hypothesis, is that they’re going to focus less on user centricity,

Dan Saffer (27:07.39)
Yeah. Mm-hmm.

V (27:25.122)
because it’s basically just hiring more tech PMs or technically savvy PMs, and a lot of them are probably going to be engineers. Not saying it’s a bad thing, but to me it’s a clear signal that could create a lot of debt on user centricity. Eventually the pendulum might swing back, where we might get so much demand for UX that we just

V (27:55.162)
We don’t have enough hands to basically go over and solve all of it. What’s your take?

Dan Saffer (28:00.818)
From your mouth to God’s ear. I hope that’s the eventual case, that people will be like, wow,

we really need that. We talk a lot at CMU… my colleague John Zimmerman talks a lot about what he calls the innovation gap. And the innovation gap is all about data scientists, engineers, and these technical PMs proposing AI projects that are technically challenging and very

interesting from a tech point of view, but less interesting and less desirable from a user point of view. They’re like, oh, this is really interesting and really hard. And the opposite side of it is that designers are proposing things that are very desirable but can’t be built. So how do you bridge that gap? That’s a lot of what we talk about in our design-and-AI class. How do you find

ways for designers to understand what can be built, and ways for the tech people, especially data scientists, to understand what people actually want, and where you can find that middle ground? So I think what you’re seeing is a lot of people being put on these very hard tech problems. Something like 80 or 90% of all AI projects fail. That’s

the statistic that you’ll see in the Harvard Business Review and places like that, and the research that we’ve done says the same: like 85 to 90% of AI projects fail. And it’s because they’re trying things that are very difficult, so they’re very high risk. If you don’t get them perfect, there’s a lot of trust lost. Damage can be done: financial damage, emotional damage, all this other stuff.

Dan Saffer (30:09.21)
And then design is either not involved at all, or they come in to, hey, make this pretty, kind of old-school lipstick-on-a-pig style design. When they should be more involved at the beginning of these projects to ask, well, does anyone actually want this? Is this actually valuable to people?

And what are the risks that we need to mitigate if things go wrong? If the AI is guessing wrong, what should we be doing? How should we be mitigating that on the front end? Are there things that we could look out for, that we could design away?

But yeah, you’re right. There are a million jobs in AI right now, and very few of them are in design. All the design jobs seem to be in growth right now, kind of growth-hacking stuff, which to me is… well, some of it. Yeah. I mean, some of it.

V (31:14.866)
Yeah, it’s telling of the times, right? Like it’s kind of the climate as well.

Dan Saffer (31:23.066)
Yeah, I mean, some of the growth stuff is fine. Some of it is like, hey, let’s fix the onboarding process, great. But some of it is definitely UX in name only. It is, again, using the tools of UX purely for the business, to get more clicks or drive more engagement, those kinds of things, which…

You know… I don’t want to downplay business value, because I think that’s a thing we should deliver. But I think in this era a lot of user value is being lost. I think this is the whole enshittification concept that Cory Doctorow talks about, where you have all these products that have been around for a long time and have just gotten worse

over time, because they’re breaking, or they just don’t provide enough value back, and the experience is bad.

V (32:33.866)
And it does resonate so much with my thoughts too, because I feel like we as designers are also a bit to blame for it. Efficiency has been one of the keywords, where every designer for, I guess, the last decade almost, was focusing so heavily on reusable components, design systems,

operational improvements, the rise of the design operations discipline. All of it was, I guess, businessified, for us to run as smoothly as possible, as predictably, as measurably, as fast and as cheaply as possible. And I feel like that has a lot to do with the enshittification of things, because you end up with…

V (33:30.434)
a pick-and-mix of sorts, where people can just go and independently self-serve and create experiences which might not be as tested, researched, validated, human-friendly, all of those things. That’s where my, I guess, spicy take is: a lot of people are actually…

V (33:54.894)
taking themselves out of their roles eventually, especially because a lot of that information can be used for AI training too, if we get back to that topic. But I feel like it’s almost a double-edged sword which we’re carrying, and it does kick back once in a while. And you have to be so careful about what you look at and where you sell your time, because you eventually exchange it for a possibly not ideal…

V (34:25.582)
I would take.

Dan Saffer (34:27.038)
Yeah, it’s funny that you mention that, because I’m literally this week teaching my advanced interaction design students design systems. And that’s one of the things that I’m saying to them: hey, you can make these great-looking UIs that are terrible. You can make things that look awesome but are terrible, because there’s no thought behind them, no strategy behind them. No one wants that.

And the UX is just bad. So it makes it very easy to skip over the actual design process and go straight to the interface-making part of it. So I think you’re right. It has operationalized things to the point where it is so easy that you will see

people in PM roles or executive roles going, oh yeah, I just went into Figma and snapped this thing together, and here it is, can we just build this? And then you say, well, okay, but this doesn’t work, or this doesn’t meet user needs, or, what were we even trying to accomplish here?

V (35:51.69)
Yeah, yeah, it takes a bit of skill. Everybody can play with Legos, but not everybody can construct a Death Star of a proper design experience, so to speak. But that’s awesome. Are you okay to talk about your book? Some of those themes I saw in the chapter you published on your newsletter were very interesting too. Are you good for us to dive into it a bit?

Dan Saffer (36:19.254)
Sure, yeah. So I’m working with John Zimmerman and Jodi Forlizzi here at the HCII. And I’m really coming at it as someone who’s done some AI work, but they have been researching AI and design, they and their PhD students, for like the last 10 years. So they’re the experts; they know, like,

everything. And they’re kind of like, oh, well, welcome to the party, we’ve been here for a while. So they’re like, here’s what we know, here’s all the good stuff that we’ve been researching for a long time. And the book is kind of about how you actually look for situations where AI is valuable

even with only a kind of moderate level of performance. What we mean by that is: how do you find these sweet spots where, if the AI guesses wrong, it doesn’t mean, oh, you were accidentally diagnosed with cancer, or, oh,

a self-driving car smashes into a pedestrian. How do you find these sweet spots where you can do small things that are really valuable to people but are also low risk? A great example of this is the transcripts on your phone, your voicemail transcripts. They’re about 80 to 90% correct,

but that’s just enough that you can get the gist of it and be like, oh, maybe I should listen to this, or, oh, that’s a spam call, get rid of it. So how do you find those kinds of opportunities? And then how do you insert those opportunities into existing products or new products? And then how do you do this kind of consequence scanning and these checks to make sure that, oh, hey…

Dan Saffer (38:41.874)
if this does guess wrong, how do you mitigate those errors? Because I think a lot of designing for AI is actually designing around the error possibilities. So that’s kind of what the book is about. They’ve been developing this method for like four or five years now, putting all these pieces together. And so this is basically the class that

they have been teaching and that I’ve been teaching for the last year, kind of in book form. And so it’s really a counter to the message right now, which is AI is like a super magical thing that’s going to do all these amazing things. And our take is, no, AI is actually kind of dumb, but it’s still valuable. So what can we use

Dan Saffer (39:39.55)
that value for. So that’s kind of what the book is about: how designers and product managers can start to find those opportunities, and then how to talk to data scientists, basically, to figure out how to make these things real. And so there’s a lot of that kind of thing too, how to work with data scientists and just how to speak the language of AI, because

It’s so confusing. There’s just a lot of terminology: oh, what’s deep learning and how does it relate to machine learning? What’s NPL? Gen AI, you know, NLP, even I’m messing up the acronyms right now. But there’s a million different things: oh, how do we measure performance? What’s a feature? Oh, a feature is not what we think of as a feature; a feature is something that’s pulled out of data.

Dan Saffer (40:35.05)
What’s that? You know, so there’s a lot of terminology. There’s a big terminology gap too that the book is trying to help overcome.

V (40:44.822)
And that’s perfect. I think it’s well needed as well, because I don’t find enough resources online on anything AI, or what’s there is quite superficial. It still puts designers, I guess, in the back seat instead of being a co-pilot, or even a driver. I guess you cannot be the driver because it’s still so technical, but influence is well needed. Like…

Dan Saffer (41:02.014)
You’re welcome.

V (41:13.766)
You know, so I feel like it’s well needed. But what is your take on, like, how could people position themselves better, before we even reach that book or they actually get it? Do you have any advice for, I guess, UX designers who are potentially dabbling or maybe trying something out? I’ve seen some cases in my community where people, I guess,

end up reskinning ChatGPT, really, or embedding it as a chatbot or as a conversational design piece in some sort of app, and then, I guess, calling it a day. But do you have anything which anyone could use right now to get better at working with those things? Because ultimately, the way they are made might be producing bad UX, you know.

Dan Saffer (42:08.882)
Yeah, I mean, some of the things that I do that maybe are helpful to other people. So I subscribe to three or four AI newsletters, mostly daily ones. And they have definitely helped me to get used to the terminology, for one thing. And two, they let me know what the state of the art is. Like, what is possible?

So that when I want to suggest something that uses AI, I have an idea of, oh, this is possible, or this is easily done, versus this is something that is cutting edge, on the edge of what is possible. You want something that’s like, hey, I know that we can easily do this. So one example here is,

like, predictive text is a great example. Why doesn’t Instagram have predictive text for hashtags? Why do people have to constantly write every hashtag out by hand when you could have predictive text? Or Twitter, same deal. That’s a capability that is very well tested, very easy to make. You could do it;

you know, it’s not hard, and it would be valuable to people. So being able just to understand the capabilities. I mean, we talk a lot about caring less about what we call the mechanisms of AI, which are the different kinds of models, like diffusion models, those kinds of optimizers, you know,

caring less about that stuff, which is kind of the purview of data scientists, and caring more about capabilities. And it’s like, well, we know that AI is pretty good at being able to predict the next couple of words of text. And even if it doesn’t do it well, it’s not that big of a risk, you know, once you take out the offensive words and other things like that. It’s not that big of a risk to be able to

Dan Saffer (44:32.554)
to add it and have it be valuable in places. So that would be what I would do if I was like, how do I start getting into this? Really start looking at industry newsletters around AI and just seeing what’s being proposed.
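That predictive-hashtag idea is mechanically simple. Here is a minimal sketch of the capability, assuming you have a popularity-ranked list of tags; the tags and counts below are invented for illustration, and a real product would personalize the ranking and learn new tags from the stream.

```python
# Toy sketch of hashtag prediction: rank known tags by usage count and
# suggest completions for whatever the user has typed so far.
# The tag list and counts are invented for illustration.
from typing import List

HASHTAG_COUNTS = {
    "#design": 9500,
    "#designthinking": 4200,
    "#desk": 800,
    "#ux": 7000,
    "#uxdesign": 6100,
}

def suggest(prefix: str, k: int = 3) -> List[str]:
    """Return up to k known hashtags starting with prefix, most popular first."""
    prefix = prefix.lower()
    matches = [tag for tag in HASHTAG_COUNTS if tag.startswith(prefix)]
    return sorted(matches, key=HASHTAG_COUNTS.get, reverse=True)[:k]

print(suggest("#des"))  # → ['#design', '#designthinking', '#desk']
```

The point is not the code but the framing: "predict the likely completion" is a well-trodden, low-risk capability, exactly the kind of sweet spot Dan describes.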

So like today, for instance, I just saw that Chrome is offering some new AI features built into Chrome, one of which is clustering tabs to make it easier to navigate. And I’m like, that’s really interesting, that could really be an interesting feature. Now, I don’t have it yet, cause it hasn’t rolled out to me yet, but I’m like a tabaholic. I have like,

Dan Saffer (45:28.842)
30 or 40 tabs open at any given time. And so I’m like, oh, that could really help my workflow if done well. And so looking for things like that that are announced and going, huh, well, if it can cluster tabs, that means the capability is that it can understand what the context is. And so how might I apply that to whatever product I’m working on? Like,

We know, for instance, AI is great at creating lists based on various kinds of clusters and stuff like that. Well, how could you use that in your product? Are there places where that might work well? So it’s just kind of thinking about AI at the capabilities level that will actually help.

It’ll help designers, because they could go and be like, hey, did you see that Chrome clustering thing? What if we did that with, I don’t know, our media or tasks or something like that? Hey, in our software, we could use a similar thing. What if we did that?
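To make the capability framing concrete, tab clustering reduces to "group items whose text looks similar." A deliberately naive sketch using word overlap follows; Chrome’s real feature presumably uses something far more sophisticated, and the titles and threshold here are made up.

```python
# Naive sketch of tab clustering: group tab titles by word overlap
# (Jaccard similarity). Titles and the threshold are invented examples.
def words(title):
    return {w.lower() for w in title.split() if len(w) > 2}

def cluster_tabs(titles, threshold=0.2):
    clusters = []  # each entry: (cluster word set, member titles)
    for title in titles:
        w = words(title)
        for rep, members in clusters:
            jaccard = len(w & rep) / len(w | rep)
            if jaccard >= threshold:
                members.append(title)
                rep |= w  # grow the cluster's vocabulary
                break
        else:
            clusters.append((w, [title]))
    return [members for _, members in clusters]

tabs = [
    "Flight deals to Lisbon",
    "Cheap flight deals Europe",
    "KMeans clustering tutorial",
    "Clustering tutorial in Python",
]
print(cluster_tabs(tabs))  # two groups: flight tabs, tutorial tabs
```

Even this crude version shows why the implementation matters as much as the concept: a bad similarity measure produces clusters that, as Dan puts it later, "mean nothing."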

V (46:48.859)
Yeah, I feel like it’s such a helpful note too, because especially, I guess, people who are just coming into UX feel like there has to be some sort of eureka moment where you come up with something which works, or you just copy. But there is also this immense, I guess…

like a virtual folder you create of things you know, or other solutions which are done, or the art of the possible, and then you let it converge. Because the best ideas as well… like, speaking for myself, I’ve worked with a lot of data scientists for years now, and we worked on machine learning solutions and things of that nature. I was kind of very humbled, because

I would try to drive too much sometimes, and I wouldn’t facilitate that exchange, because everybody has their own takes on things. And it’s almost like the opposite challenge, where you maybe don’t feel like you contribute enough and then you try to drive too much, versus the partnerships you can create. But it does come down to that: you have to have enough signals to then call it out

and say, hey, I have this flag, maybe this is going to make sense, and then kind of converge with other people. Because people who implement usually also have a lot of good ideas, and the users too. I’m sure if Chrome did their research, which I hope they did, they probably talked to maybe a Dan somewhere else, or someone totally different who maybe has the same needs or problems as you do, and then

Dan Saffer (48:24.47)
I hope they did too, yeah.

V (48:36.294)
that down the road kind of flourished into this project, which is probably going to be useful. But you also never know until you test it.

Dan Saffer (48:45.542)
Right. I mean, there’s the concept, right? Like, oh, hey, why don’t we cluster tabs? And then there’s the implementation, which is like, wow, these clusters are absolutely useless to me, they mean nothing. If the clustering is bad, then the product fails. Like, oh, well, I guess I’ll switch off that feature, I won’t use that.

Dan Saffer (49:11.034)
But yes, I mean, it is hard, you know, especially when you’re starting out, to figure out to what level can I drive or push this project. Because, yeah, you don’t understand the limits of the model. You don’t understand how difficult something is compared to something else.

So that’s where having an understanding of, hey, I’ve seen this in a lot of places, this feels doable, versus just pulling something magic out and being like, oh, well, what if the AI can schedule my airline flight:

I want to spend $500, and I want to go to this city, you know, all these very complex things. And it’s like, well, that’s intensely difficult.

V (50:16.81)
Yeah, it could be a vision though, you know, a vision piece. Yeah. But you kind of would need to walk back and see what makes sense. One of the things which I saw: you shared the chapter in your newsletter, and I think one of the quotes, if I could pull it out, was very topical, and I think it would be interesting to hear your thoughts. You said, and of course I’m going to paraphrase this, so pardon me if I didn’t get it right:

Dan Saffer (50:20.839)
It could be, sure.

V (50:47.122)
Interestingly, successful AI projects, like, and this is you citing, I think, the Airbnb smart pricing example, are usually more accidental than intentional. In fact, almost all AI projects fail, as you mentioned before; most fail quietly before launch. Why is that the case? What have you, I guess, experienced?

Dan Saffer (51:11.69)
Well, most of them fail just because, you know, either the data is bad, the model is bad, or if you do user testing on it, people hate it. There’s all different kinds of reasons an AI thing could fail. Like, it’s too expensive: oh, we built this thing, but we’ll never be able to get our money back on it. You know, there’s…

V (51:41.942)
That’s so typical.

Dan Saffer (51:42.414)
So difficult, right, yeah. I mean, in my Design for AI class right now, we’re actually talking about money. Because people were like, oh, well, why are we talking about money in a Design for AI class? And I’m like, well, because these things are expensive to run. One of our PhD students left a little project running over the weekend

and came back to like a $10,000 bill in processing fees. And it’s like, oh my God. There’s a real cost to developing these things. And so when you are proposing these ideas, you have to figure out: is this gonna pay for itself in maintenance? Right now everyone’s talking about

that new Rabbit AI device. And they’re saying, oh, even for the heaviest users, it’ll cost us like $15 a month. It’s like, yeah, but you sold the device for $200 and said no subscription fee ever. So $15 a month… let’s say the device only costs $50 to make, which is probably something like that. So you’ve only got like, you know…

that’s only 10 months, you know, before you’re losing money on each device. How do you make that work?
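Dan’s break-even math is worth making explicit. The $50 hardware cost and the $15-a-month inference cost are his rough guesses in conversation, not published figures:

```python
# Back-of-the-envelope Rabbit R1 economics using Dan's rough numbers.
price = 200           # one-time device price, no subscription fee
hardware_cost = 50    # assumed build cost per unit (Dan's guess)
monthly_ai_cost = 15  # assumed monthly cloud/inference cost, heavy user

margin = price - hardware_cost             # $150 gross margin per device
break_even_months = margin / monthly_ai_cost
print(break_even_months)  # → 10.0 months before a heavy user becomes a loss
```

Under these assumptions, every heavy user who keeps the device past ten months costs the company money, which is the viability problem Dan is pointing at.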

V (53:19.902)
Yeah, unless they add ads or some other monetization mechanism, but it’s so nuanced then. You kind of go on this rabbit chase, ultimately: how can you actually make it work? Have you actually seen the demo? I noticed on Twitter the other day actual demos of the Rabbit in the wild, and the delay was…

Dan Saffer (53:44.37)
Yes. The delay, yes of course.

V (53:47.726)
20 seconds or something along those lines. And I was like, yes, why is this even unexpected? Because this is exactly what we are dealing with: the actual processing power. The demos of a product versus the reality are likely gonna have the same challenges. And then you almost lose all the value of it.

V (54:15.35)
There is almost no value if you’re gonna have to wait on a simple invocation. It’s like, I would ask a new question now, and then I receive an answer after minutes of wait time.

Dan Saffer (54:28.574)
Exactly. Right. It’s crazy. I mean, we talk about this a lot in the Design for AI class, cause students are always proposing things like, hey, live translation and these kinds of things. And it’s like, well, that’s very expensive, you know, and the processing is going to a cloud and coming back.

Dan Saffer (54:58.646)
You know, and is there a delay for all this stuff? And how much of a delay will people tolerate before the user experience is so bad that there’s no desirability anymore? You can’t just have these five, ten second delays in conversation. It’s just completely unacceptable. I mean, that was the one thing that

V (55:21.006)
It’s unacceptable.

Dan Saffer (55:27.846)
you know, Amazon did so right with Alexa, where it was, hey, it has to be responsive instantly. It has to respond quickly, within less than… I forget what the measurement was, but it was enough that it was like, oh wow, hey, this really works. And

V (55:51.69)
Yeah, yeah, and I would even challenge that, because, you know, we owned a couple of Alexas at home here, one bigger one and one smaller. But even then, I think the response time was okay, but it was always a bit jolting, where you kind of had to learn that you’re going to wait like half a second or a brief moment until…

And I think there were visual cues. It was by design, I guess, made the best it could be with the technical limitations. But as a user, I guess you knew what you were getting because it was so novel; you also had to almost behaviorally change and be okay with that. But yeah, I’m with you. It’s such a good example of, like, what could

V (56:46.606)
good enough look like compared to what certainly doesn’t work. And not to crap on the Rabbit R1 or those devices, but there’s a long way to go for them to be usable.

Dan Saffer (57:02.614)
But I think this is one of the things that, as we’re talking about this, it’s like, well, there are all these aesthetics and experience-level things that most product managers are not really trained to think about. And so when we go back to our previous discussion about, hey, are product managers going to be designing these things? Maybe not, you know, because they’re not trained to

think about these kinds of things. I mean, these things do appear, but sometimes, if you can catch them early and design for them faster, you may have a better time launching your product.

V (57:54.014)
Yeah, yeah, it seems like there has to be a bit of due diligence to know how viable that is. Some products are also way too early. Like, I think this Rabbit device is technically likely going to be very underwhelming, or it needs some, I don’t know, quantum computing before it can be that.

Dan Saffer (58:17.254)
Yeah, it’s funny. I saw Tony Fadell speak, you know, the iPod guy and the Nest guy, and he was saying he never worries about being too late to market; he always worries about being too early to market. And I really took that to heart. That makes perfect sense to me. Because if you don’t have

the right combination of speed and processing power and those kinds of things, stuff just has no user adoption, or just limited user adoption. And so knowing those things, and being able to judge those things, is, I think, huge. That’s a design decision.

You know, we talk about look and feel; that’s the feel part of look and feel. Like, how does it feel to use it? I think we forget about feel a lot in look and feel. That’s kind of why I wrote the Microinteractions book, because that book, I think, was a lot about feel, right? It was about, how do you make things seem like they’re quality? Well, not seem,

V (59:21.738)
Yeah. Yeah, definitely.

V (59:33.814)
Such a great read.

Dan Saffer (59:44.414)
actually be quality. But how do you think about things being quality? Like even just a button push. I mean, the iPhone would not have worked as well if there was a lag each time you pressed an icon. If there was a second lag, it would have felt super clunky.

V (01:00:11.078)
Yeah, that’s, I guess, how the iPhone got ahead, because you probably remember the touch devices from Samsung and others, and they were underperforming on that microinteraction level. But one challenge to that, I guess, or maybe not so much, please correct me if I’m wrong: one of the quotes, again borrowed from the chapter, you wrote that AI can be successful

Dan Saffer (01:00:19.678)
Yeah, totally.

V (01:00:39.53)
as long as it is valuable and low risk for users and organizations. What did you mean by that? I guess some people who are going to hear this will be like, oh yeah, but the Rabbit R1 is moderate performance, or low performance. And low risk as well, for users who adopted it; maybe low risk for the organization, which already sold it, and sold out.

Dan Saffer (01:01:08.982)
Sure. I mean, I guess the idea is to find places where the AI does not have to be entirely accurate. The intent was: there’s a lot of AI that is trying to do very challenging things that

are currently done by experts. So things like, you know, even driving a car, there is some level of expertise involved in doing that. Driving a car through city streets, there’s a big level; I mean, we make people get driver’s licenses to do that. So there’s a lot of people chasing those kinds of very hard problems when there are things that are

AI based that just have low risk to them, or are just moderate. If the AI guesses wrong, it doesn’t cause a lot of issues. So a couple of examples. Spotify just launched this thing in the fall called Daylist, which creates these micro-genre playlists.

That’s really interesting, I really enjoy it. So every day it looks at what you’ve played during that day, pulls from its database, and literally makes a little sentence that says something like, you know, “lazy Sunday heavy metal jam band” or something like that. That’s kind of what you listen to, and we’re going to make this playlist. Which,

again, is moderate performance, they probably had most of the tech off the shelf; high value to me, cause it helps me discover new music; and low risk to them. And low risk to me, the user: oh, if I don’t like a song, well, I just skip it, it’s not a big deal. Another example is how Netflix, with the icons for movies, the movie posters,

Dan Saffer (01:03:32.702)
alters them to be interesting to you in particular. So that kind of stuff is, oh, well, that’s super low risk, and could be valuable to me again, cause it helps me discover things. And moderate performance: it’s not trying to do a moonshot. It’s just trying to create covers

out of all the pieces it currently has, and just assemble them in a way that will be pleasing to me, that’s personalized for me. So things like that: the book is really trying to find those opportunities, rather than, hey, let’s use AI to do medical imaging, or, hey, let’s replace your doctor with AI. Those things…

Dan Saffer (01:04:31.498)
Those things, I don’t want to say that they’re impossible, but they’re extremely challenging. And so you’re leaving a lot of AI value on the table by doing that. There are a lot of things AI can be helping with, you know, tab clustering, that don’t rely on these very complex systems that are

Dan Saffer (01:05:01.114)
risky to both users and the business. You know, hey, how many bad AI projects find themselves on the front page of the New York Times or something? Just bad PR. And you’re like, oh my God, how did this get out the door? So embarrassing.

V (01:05:24.158)
Yeah, I think it’s also linked to, I guess, the deeply experimental nature of it too, because a lot of it, to me at least, when it comes to AI projects right now: they are all experiments. They might become these bigger initiatives or projects, but it’s still very early. We’re just scratching the surface, I think, especially from a UX perspective as well. And I think…

V (01:05:53.598)
with UX, we’re also like trying to chase it too and, you know, and almost not get left behind. But yeah.

Dan Saffer (01:06:03.718)
I mean, definitely when we talk about generative AI, I mean, that is extremely new ground, right? That is like, we are definitely still experimenting and learning what it’s good for. We, I don’t think we exactly know all the things that it’s good for now. We know it can do some things really well.

But what it will eventually be good for is going to be really interesting, particularly as we start to apply more UX to the wrappers around these things. Right now it’s like: type in a prompt, push the button, and pray that in 20 or 30 seconds something comes back that seems sort of like what I’m interested in, you know.

Dan Saffer (01:06:58.878)
It’s like giving your intern a job and having them go do something, and then they come back the next day and it’s like, here it is. No, this isn’t what I wanted, can you try this again? Okay, I’ll see you tomorrow.

V (01:07:14.494)
Yeah, and surprisingly, things are also getting progressively worse, at least from my experience using ChatGPT. But maybe that’s just my observation. Though I feel like it’s been quite widely reported that it’s not at the same level as it used to be. And it’s likely because of, again, viability, the cost of running it, all those factors. I believe that I’m getting the light version

of the models using ChatGPT today, compared to what I used to get. But anyways, it has been such a great chat. I think one last thing, if I could just get a couple of extra minutes of your time. I’m going to get back to where we started, and it’s the UX disciplines framework. I wonder if you would change it at all today,

Dan Saffer (01:07:45.266)
Oh, yeah.

V (01:08:18.917)
what would you do to further enhance it given the times or where the industry is?

Dan Saffer (01:08:26.618)
I think one of the things that I left off the original drawing… I mean, hilariously, I did this drawing probably in 10 or 15 minutes, and it’s going to be on my tombstone or in my obituary or something like that; that drawing follows me forever. But I left out content, and I really think that was probably a mistake. Like,

I mean, some products are nothing without the content that’s in them. And so how much are we involved in the designing of the actual content that gets placed

in the things, in the wrappers that we make for these things? And, you know, if you’re a magazine or that kind of designer, well, of course you get involved in that. But it’s interesting, having worked at places like Twitter, where you have to be very agnostic about what goes in there. Cause it could be,

you know, a president threatening nuclear war, or it could be a funny cat video. So I think the content piece is missing. I think some of the social and psychology pieces are probably missing. I think there are things like,

network effects and group dynamics and those kinds of things that I think have definitely been left out of the diagram, that could probably also fit in there. Like cultural studies, those kinds of considerations. Cultural studies, sociology, those things I think are definitely in there.

Dan Saffer (01:10:43.434)
And I had one more. Sociology and groups.

What was the other one? I’m blanking now.

Dan Saffer (01:10:59.242)
This is the part that you’ll cut out of the… This is the… Yeah.

V (01:11:01.27)
That’s fine. Don’t worry.

Dan Saffer (01:11:09.87)
Oh, sociology, psychology, content.

Dan Saffer (01:11:21.002)
Oh, I can’t remember. So those are at least the pieces that I would… Well, I mean, maybe the other one is economics. Getting back to the business model, which I think, as we’ve seen in the last 10 years, affects the user experience. If it’s a paid subscription versus an advertising-based

Dan Saffer (01:11:51.17)
service, it’s totally different. The experience of it is very different. Although it’s funny now that even things that are paid subscriptions, I feel like I get advertising in those too, because again, everyone’s trying to squeeze the last bit of value out of their users, or they did the pricing wrong and were like, oh, well, whoops. Now we need to…

The streaming service costs us more than we thought, you know, shocking. Okay, we can’t raise prices again for the third time this year, so let’s just add ads in there. Okay, great. So I think maybe stuff like that: economics and business models.

V (01:12:38.626)
Yeah, it does make sense. I think it’s also telling of UX industry hiring, because you get job ads in UX which specifically state, let’s say, SaaS experience, or product-led growth, or sales-led: a lot of different keywords which basically make for a very different experience, depending on what you want to achieve. So I definitely…

V (01:13:07.806)
It resonates a lot. But Dan, it’s been such a pleasure, and I’m really appreciative of your time. Where can people find more about you? Where can we send them?

Dan Saffer (01:13:21.242)
I’m no longer on Twitter; I’m on Threads now, so I’m @dansaffer on Threads. I have a permanent website, odannyboy.com, that is always there. And yeah, at Carnegie Mellon I teach at the HCII, the Human-Computer Interaction Institute, so you can find me at CMU.

V (01:13:51.231)
Awesome. Thank you, Dan.

Dan Saffer (01:13:53.128)
This has been great. This has been fun.

V (01:13:55.338)
Yeah, likewise. Nice.

Dan Saffer (01:13:57.96)
All right.


Experience designed newsletter

Twice a month I send out 5 new things you should know about strategy, design and tech innovation, always including at least one out-of-the-box thing. Additionally, a roundup of featured vaexperience content you might have missed.

Free, no catch, unsubscribe anytime.