Brand Name

Ep16. The Future of UX Research and the Importance of Evaluative Research Today with Dr. Nick Fine


Dr. Nick Fine is a user research leader, known for advocating standards within the broader UX design community. He is also a recurring guest on our podcast. In this follow-up session, we continue exploring the theme of ‘Better at UX Research than AI’ and delve into the future of user research practices. We discuss how the UX industry and its user-centric approach will evolve over the next five years, emphasizing the significance of evaluative research in today’s landscape, among other topics.

Listen to the episode

Also available on all major podcast platforms and YouTube.

Transcript

Vy: [00:00:00] Hey, welcome back. This is the Experience Design Podcast, and today I have a special guest, Dr. Nick Fine. We continue where we left off in the previous episode, done a few months ago, on AI, user research, and the UX industry in general. But in this session specifically, we’re going to talk about what user research is going to look like in the next five years.

We’re going to talk about the importance of focusing on evaluative research right now, the opportunities Nick has discovered in the meantime, and a lot of other themes. If you’re into UX and user research, this is a gold mine of an episode.

So I hope you enjoy it. If so, recommend it to a friend, because that certainly helps. And on that note, here’s Dr. Nick Fine. It’s been a few months since we last spoke, and a lot has changed, but also not so much, at least in my view. But I [00:01:00] want to hear your thoughts, because you had some strong takes last time we spoke about AI and generative tech. What’s on your mind today?

What keeps you awake? 

Nick: Yeah. The stuff that I’ve been posting on LinkedIn in the past week is really what’s keeping me up at night. The sense that I’m getting, my intuition, instinct, whatever you call it, is that evaluative is where all the value is right now. And it will be for quite some time.

Think of all this generative work. It’s not generating the value that businesses need. It’s generating some value, but the real needle-moving value is in evaluative. And I’ve always known this, and I think any old-school UXer kind of knows this. Let me just preface all of this for a second. I don’t know what your experience is, but my perspective from back in the day, in the old-school UX consultant role, is that we never did discovery as a distinct upfront piece of work.

Yes, there was upfront research, kind of [00:02:00] contextual research and background research, but they were small, fill-you-in things. They weren’t big, chunky, defined pieces of that discovery- or generative-level work. Insight usually came through the lens of an evaluative session.

It came out through that activity. Again, if you work in British government anywhere, discovery is a distinct phase, so I’m not including that. What I’m saying is that discovery became its own thing, its own beast, really big, and that’s what we see today, in April 2024: we’re seeing market-research UX.

It’s surveys, interviews, very generative, very little in the way of evaluative. So now let’s get back to the AI bit. I really firmly believe that there will be services created over the coming years that are generative researchers in a box, in a bot. It isn’t a great leap of imagination to [00:03:00] see a bot recruiting through an API, automatically emailing people that have signed up to TestingTime or userinterviews.com or one of those paid recruiting services, sending them a Calendly-like invite so they self-book in. Then it runs and scripts its own unmoderated session. It will apply that script, or that survey, or run that interview. It will then analyze the data. It will write it up.

It will distribute it. And it can do this all the time. So I’ve just got a generative dashboard that I can look at to see what’s the state of play. Not a distinct phase, right? That’s all very doable. So human beings right now are trying to define user research as something that AI can very easily mimic, copy, replace. So that’s been a big red flag, and that’s why I’m kind of shouting to everybody right now: if you don’t get your evaluative chops up, [00:04:00] all you’ve got is something that is discovery-based, and the AI can probably do it better and faster and cheaper than you.
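The researcher-in-a-box pipeline Nick describes, recruit through a panel API, self-serve scheduling, an unmoderated scripted session, then automated analysis and write-up, can be sketched as a simple orchestration loop. This is a hedged illustration only: every function here (`recruit_panel`, `schedule`, `run_unmoderated_session`, `analyze`) is a hypothetical stand-in, not a real integration with TestingTime, UserInterviews.com, Calendly, or any other service.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Session:
    participant: str
    transcript: str

@dataclass
class Report:
    findings: List[str] = field(default_factory=list)

def recruit_panel(n):
    # Hypothetical stand-in for recruiting via a paid panel provider's API.
    return [f"participant-{i}" for i in range(n)]

def schedule(participant):
    # Hypothetical stand-in for a Calendly-like self-booking invite.
    return f"{participant} self-booked"

def run_unmoderated_session(participant, script):
    # Hypothetical stand-in for running an unmoderated, scripted session.
    return Session(participant, f"{participant} answered: {script}")

def analyze(sessions):
    # Trivial 'analysis': collect one finding per session transcript.
    return Report(findings=[s.transcript for s in sessions])

def pipeline(script, n=3):
    participants = recruit_panel(n)
    for p in participants:
        schedule(p)
    sessions = [run_unmoderated_session(p, script) for p in participants]
    return analyze(sessions)  # a real bot would then write up and distribute

report = pipeline("How do you currently book travel?")
print(len(report.findings))  # 3
```

The point of the sketch is Nick’s: nothing in this loop requires human judgment, which is exactly why discovery work structured this way is easy to automate.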

Vy: Yeah, but you see, it’s an interesting point. To me, that discovery chunk, a lot of the tech houses, even product managers or teams, consultants, have been productizing the discovery itself.

I have no idea what the number is, but a lot of the market, a lot of different industries, use discovery to make a living. That’s consultancy 101, because chances are they go for some research, some discovery, some definition, rough sketching, ideation, things of that nature, and then they leave it. That evaluative thing was probably more reserved for in-house teams or, if they’re lucky, consultants who are continuously looking at it.

Right. So I feel like there are maybe two driving forces here, and the big force is really in that generative [00:05:00] research phase, or that bucket, to me at least. I feel like that’s where we in the market are probably dividing our ideas and our time between the two, and spreading, which is interesting.

But you were saying, I guess, that evaluative is, 

Nick: Much more valuable. It contributes much more genuine value to the project or the product, to make it better, to make more money. It’s quite that simple. Discovery stuff, yes, it’s got some value, and you definitely don’t want to start building without understanding who your audience are or what their needs are, that’s for sure.

But there’s no reason why you can’t be doing that in sprint, you know what I mean? Rather than having distinct upfront pre-sprints, or sprint minus, or sprint one, two, three, you know what I’m saying? That’s very waterfall. Very, very waterfall. Also, it’s really important to mention for your audience that we’ve been here repeatedly before.

Repeatedly. We’ve been here where management consultants or UX consultants or anybody says, I want to do a big [00:06:00] chunk of upfront research, because it’s really billable, good, easy money. And it used to get cut, and it’s now getting cut again. By fattening up your invoice, you’re ruining the industry. You’re ruining things, right?

It’s grabbing too much. It’s too greedy, whatever you want to call it. It’s not appropriate. And if you’ve done any consultative selling, that’s not what your client needs, so stop giving them stuff they don’t need. So yeah, you’re right, it has become productized. It has become its own industry. But in order to make research valuable, okay, and I want to say make research great again, but that sounds awful.

In order to make research a valuable and necessary thing, that value will have to come from evaluative or from behavioral work. Now, I know I’m wildly biased. I’m a behavioral psychologist, a behavioral scientist. But behavior, in a world of AI, is a really difficult thing, really a difficult thing.

That’s where human intelligence beats AI every [00:07:00] time. So that’s the real value. We know that evaluative builds better products by providing constant course correction. Sorry, my camera’s a nightmare.

Vy: But how do you imagine that looking, I guess? Because when you describe evaluative, to me it’s like, yeah, it makes sense.

We have a lot of scores, the PURE score, the SUS score. We have a lot of metrics which we can tag to the behavior, and use AI to do it better, someone can crunch it. But what’s your thinking? How would that actually look if it’s done right?
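The scores Vy alludes to are mechanical to compute; the human part is interpretation. For instance, the System Usability Scale (SUS) turns ten 1-to-5 Likert responses into a 0-100 score: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5. A minimal sketch of that standard formula:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) contribute response - 1;
    even-numbered items contribute 5 - response; the sum scales by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # 0..100

# Strong agreement on the positive (odd) items, strong disagreement
# on the negative (even) items gives the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Crunching this is exactly the kind of work AI can already do; deciding why the score moved is the evaluative judgment the conversation turns to next.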

Nick: AI can do behavioral to a very basic level.

It doesn’t understand the nuances, the subtleties of a human just being human. Not at all. AI for 20 years has been able to identify the number of [00:08:00] people on a platform, or footfall in a supermarket, those sorts of problems, stuff that you are probably well aware of. But when it comes to user research, training a bot to identify all the nonverbal communication, to have a level of empathy, to be able to pick up on things that are not said: it won’t be able to do that for many, many years. A long time, or at least long enough that our careers are going to be safe. I know I’m probably not answering what you asked me, but this is really critically important. Research isn’t about what comes out of people’s mouths every time.

Okay, sometimes it is, sometimes that’s necessary, but there are other times when that’s not the story. That’s not where the insight is. Now, when you’re asking somebody to evaluate a product or a service that you’re building or have built, what comes out of their mouths is only part of the story.

It doesn’t give you the [00:09:00] whole picture. What they’re doing, how they’re interacting with it, tells you so much about the experience that they’re having in their head, the real UX, if you know what I mean.

Vy: Yeah. 

Nick: So we have to, because I can’t plug into your head. I’m not a cognitivist; I can’t plug in and have a look at the black box.

I have to look at your behavior to see what’s going on in your head. Okay? The bots can’t do that. We have to do that. So to know if a design is successful, and to not rely on what people are saying, you need a human being to look, to observe, and to interpret.

Vy: Coming back to my original question, I guess that would still require you to have some sort of defined journeys existing, launched product flows. It’s really hard to evaluate something if there is nothing. That’s where I think you transition between generative and evaluative, and you kind of need to pick your battles, in a way, right? It’s probably more appropriate to use AI for something you already have, and [00:10:00] maybe invest there to develop the tools, if you are a developer.

Nick: Everything is so ‘it depends’, you know what I mean? It’s so contextual, so applied. When I’m doing evaluative work, and I don’t know if this is a way of answering that question, I prefer it to be minimal script, right? Very open-ended, not defined journeys, as you’re saying, right?

What I mean to say is, I want that kind of first-use experience as naturally as I can get it, because I, or you, or anybody else isn’t going to be there when they’re using it for the first time, or when they’re using a function or an element or a page for the first time. So I want to see that immediate reaction.

I want to see the immediate behavior, the natural, raw behavior, because that’s the truth. That’s the insight. Being able to pick up on that insight and report it back to the team is how you build a better product. It’s not difficult. It’s not rocket science. But it does get quite nuanced.

If you look at a really junior researcher in an [00:11:00] evaluative session, there’s a ton of stuff that they’ll miss. When reviewing a three-minute segment of video, an experienced researcher can probably pick out 20 different things, whereas an inexperienced one might just pick out one or two headline things. They’d be distracted by what the person said instead of watching where the cursor went, where they clicked, or whatever was actually happening on the screen. Because what’s happening on the screen and what’s happening in their head, or what comes out of their mouth, are not usually the same things.

Vy: But would you then, in that case, as a researcher using AI for evaluative research, use it as an assistant to begin with? That to me seems like the immediate future. I don’t remember who I spoke to before now, but someone mentioned a similar sentiment, if I’m reading it correctly. They said: if I were to use AI tools, I would position them like a junior UX person, and someone would still need to make that critical judgment [00:12:00] at the forefront.

Nick: And that’s wholly true of all AI, right? You’ve got to have a human being at the top of the decision chain. Otherwise, it’s not Skynet, but the AI tail is wagging the dog, and that’s completely insane.

I think that, in my mind, the question is how far up the chain that goes. I consider AI to be my research assistant when I’m using it for analysis work. It’s like having a junior or an intern who needs the experience, and they’re going to do much of the heavy lifting.

At the moment, I suppose I should say, since we last spoke, most of my prompts don’t work. I’m being completely honest with you. Most of my prompts don’t work. I am unable to use AI, and I’m saying specifically GPT Plus and Claude and Gemini. I pay for those services, and I had canceled Claude in disgust a couple of weeks ago.

And I’ve just reactivated it to see if anything has changed, [00:13:00] but my prompt library is not working, not reliably. I’m spending more time double-checking and triple-checking than I am doing the work. And that means it’s a false economy. It means I don’t have the time to spend more time doing the analysis.

I just don’t have the time, so I can’t use it. That’s not what everybody in the world is going to be telling you; lots of people want to sell you stuff. It’s kind of annoying me and really bugging me, but I happen to be truthful about this. There are people out there who are not user researchers promoting how to use AI for user research.

Vy: I still think, to me, those tools are not really made to answer user researchers’ needs. It’s almost a meta point: they’re not cognizant of how you would do that, how you connect the dots, or how you spread your remit to research. But the thing is, even the tools which are made for researchers, and I could name a few, let’s say Dovetail would be one, for a repository or [00:14:00] management generally, the AI features, to me, and this is my N-equals-one perspective as a user, they’re peppered on and they’re additive. They’re not really redefining how you would actually automate your process as a researcher. And again, I’m going to give them slack, because it’s a race of applying generative AI to every single app. Every tool is trying to get ahead and just experiment, experiment, experiment. And that’s why it’s so underwhelming, in a way, in how much you get and how much you don’t get.

Nick: Can I pick you up on that and say something controversial, which I know you’ll probably think is a good idea?

This experimentation that product managers are doing isn’t experimentation. Okay, let’s just call it straight out: what it is, is guessing at different ideas. And that’s a wholly different thing. That is not experimentation. There’s a lack of structure, and it’s the lack of structure which is the problem.

It’s kind of knee-jerk. Let’s try this, let’s try this, let’s try this. [00:15:00] Without structure, it’s just guesswork. I actually said that in my science-of-UX conference talk in 2018 or 2019. Without that structure, it’s pointless. Experimentation has to have structure in order to learn.

And with lots of product managers, it’s just not learning, frankly.

Vy: Why is that? Why do you think that is?

Nick: I think because they’re told to experiment. I think that’s the language. And I’m not sure if it’s Teresa Torres or, what’s the other big product guy, Marty Cagan, I don’t know if it’s those guys, but experimentation seems to be fundamental to product management these days. Yet we haven’t given them much in the way of science, or some kind of structure, to that experimentation.

Now, I do have to give Teresa Torres huge credit, because I’m a big fan of her opportunity solution tree. As a model, I think it’s brilliant. I don’t think there’s anything else like it [00:16:00] in the world. I don’t think it’s perfect. There are some UX issues with it, should we say, but they’re not insurmountable; they’re iterable, they’re fixable.

I could be wrong, okay, so Teresa, if you watch this, I’m hugely sorry, but I think Teresa needs to make the explicit connection between the OST, the opportunity solution tree, and experimentation. I don’t think they’re explicitly connected. I think the OST is there for kind of lateral thinking, considering alternative routes or alternative potential solutions, which is great.

It needs to be the rubric, or the map, or the guide for the experimentation, so that there is structure there and you have a logical progression. Because there’s none, and it’s crazy. It’s stupid. And I shouldn’t be the one calling this out. It should be the product people calling it out.
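The explicit OST-to-experimentation connection Nick is asking for can be modeled directly: hang the experiments off the solution nodes, so the tree itself becomes the structure for the experimentation rather than knee-jerk guessing. This is a hypothetical sketch; the field names are illustrative, not Teresa Torres’s terminology.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Experiment:
    hypothesis: str
    method: str            # e.g. "usability test", "A/B test"
    result: str = "pending"

@dataclass
class Solution:
    idea: str
    experiments: List[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    need: str
    solutions: List[Solution] = field(default_factory=list)

# Opportunity -> candidate solutions -> the experiments that would validate them.
tree = Opportunity(
    need="Users abandon checkout",
    solutions=[
        Solution("One-page checkout", [
            Experiment("Fewer steps reduce drop-off", "usability test"),
        ]),
        Solution("Guest checkout", [
            Experiment("Removing signup reduces drop-off", "A/B test"),
        ]),
    ],
)

# The tree makes it visible which ideas still rest on guesswork:
untested = [s.idea for s in tree.solutions
            if all(e.result == "pending" for e in s.experiments)]
print(untested)  # ['One-page checkout', 'Guest checkout']
```

With the link made explicit like this, a “logical progression” falls out for free: you run the experiments attached to a solution before promoting it, instead of trying ideas in an unstructured loop.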

Vy: You know, it’s much easier to cover and comment on that from the side, too. To my community, I think I shared this, and it’s been months already, but there is a big boom of technical product management roles which are focused on AI. If you were to search, for whatever reason, for ‘senior product manager’, every other opening, at least in the UK or Europe, has the suffix ‘AI’.

Everything has that suffix. It’s demand, I guess, because businesses are maybe FOMOing, or feeling that they’re going to be left behind for whatever reason, but they’re capturing a lot of people who might not be prepared. And I don’t know, it’s maybe not for us to solve product management challenges, but I feel like researchers, engineers, and designers are going to get the short end of the stick, because they’re not the ones who are going to be empowered.

You can hire as many PMs as you want, but their team is going to have to pick up the slack, so to speak.

Nick: My worry is that if you have a technical AI product manager, there’s too much weighting on the technical AI bit and not enough [00:18:00] on the product management bit. And that’s when people get stuck in the feature-factory loop of: we’ve got a new technology, where do I apply it? Which is the wrong way around.

Everybody, the whole world, is doing this right now, in part because they’re forced to, because AI is here and you want to look for use cases and applications. I get that. But from a UX perspective, you should be looking at what needs you can fulfill with AI as a solution.

Vy: Yeah.

Nick: That’s the better way of thinking. 

Vy: I am with you. I think, can I bring up one question from one of the community members? It’s actually a follow-up to our previous podcast, which generated quite a few inputs. One of the questions, and I’m going to quickly paraphrase it, was to do with AI usage for user research, ultimately, because I’ve seen quite a few PMs post about how they research using ChatGPT, or how even designers generate personas. But this person is saying: my concern is [00:19:00] that less valuable insights gathered from UX research conducted by AI don’t necessarily mean less profit.

Nick: Whoa. So again: AI-generated stuff doesn’t necessarily mean less profit.

Vy: Yes. And let me follow up with that. Profit is what PMs and business execs value most. We could argue forever that products relying on real, rich, human-analyzed user research help companies succeed much better in the long term and establish more sustainable CX relationships. But no immediate profit insight still makes our efforts worthless for most organizations, because they’re not actually trying to help people as their primary goal. It’s a long-winded question, but I think the sentiment is that people just don’t know. Businesses just don’t know. They don’t realize. They see more insights, more data, and they think that means profit, I guess.

Nick: Yeah. Okay. So how can we counter that? Because, with the [00:20:00] greatest of respect to the question asker, not all information is valid. That’s a big, big, big thing to say. If you treat all information as somehow equally valid, then yeah, I can see how it’s all about making money. But the world isn’t that simple, unfortunately. Making money often comes down to valid information, or valid insight, right?

What we’re seeing in the world today is an awful lot of wastage and building the wrong thing, because you’re building it top-down, not bottom-up, or just not on insight, or on just bad ideas, you know, or invalid information. That’s the problem. And so you’re not making the money, people are getting fired, all kinds of stuff is happening.

So I understand, I completely agree with the question asker about the need to make money, the need to wash our faces, the need to generate value. It’s never been stronger, and that’s irrefutable. If we’re not generating money for our companies in a measurable, discernible way, then we’re a cost, [00:21:00] and people get optimized out as costs.

When we’re researchers, we are people who are contributing insight to the world that makes money. If that insight contribution isn’t making money, then you’re just describing the world, you know what I’m saying? And there’s no value in that. And so this is why I’m kind of screaming from the rooftops right now: generate value.

And when I say value, I don’t mean something wishy-washy. I mean make money, or reduce costs, or something that moves the financial dial.

Vy: But does that mean, I guess, focusing, maybe hedging the goals in a way for what researchers do? Because I’m reading into this comment, by the way, and again, this is a great question too, because it describes so many issues and opportunities.

What I’m reading is almost a bit of a frustration, maybe a lack of agency or autonomy: that researchers are a bit too boxed [00:22:00] in to do research which is not immediately adding to the bottom line. And then, if I’m a business owner, why would I care? Because I can talk to ChatGPT and it tells me immediately.

Nick: Yeah. There are too many researchers who get bossed around, who are treated like minions and order takers. Now the update, I suppose, is: that’s what you get when you employ cheap, inexperienced, more junior people, right? When you employ a more senior, more experienced person, they will respectfully push back and say, that’s not right.

If you don’t know any different, you do as you’re told by your product manager or whatever. But if the product manager doesn’t know what they’re doing, which frankly is fairly common, then you end up in this kind of nobody-knows-what’s-going-on circle. You need somebody to say, actually, this is what we need to do.

And usually that comes from a senior or an experienced practitioner that helps advise the product manager and say, in order for you to get the outcome that you want, I need to do this activity. But instead what [00:23:00] happens is the product manager says, go and do this activity. The minion goes away, like a good minion, does the activity, comes back and gives the product manager what they wanted.

But it doesn’t get the outcome needed because it was the product manager doing the thinking instead of the researcher. 

Vy: Yeah. 

Nick: So one of the things about researchers, especially seniors, is we’re not the most popular folk in the room, because we’re often saying things that people don’t want to hear. We’re reporting insight back, or we’re saying, you know, I found a really big bug in the code that you wrote, Mr. Developer, or the call to action in this design couldn’t be seen, people didn’t understand that function.

I’m doing that right now: telling a designer straight up that a design sucked, unfortunately, or that an iteration didn’t work. You know? Or it’s a higher-level thing: you guys are building for an audience that doesn’t exist, or the purpose of your product is wrong.

These are big, important challenges that I think only a researcher can push back with, to [00:24:00] inform.

Vy: But to me, it sounds like it’s also a certain seniority, or a certain kind of researcher, too, because I’ve personally witnessed people who would take orders, usually at the more junior end of the spectrum, because more senior people get, or at least should get, more political about how to bring the message up. But to me, the market is also dictating a lot of that.

And the signal I’ve seen is that there is very little room for juniors, because there is no free cash anymore. The graduate programs have quite dried up; it’s only big tech, and in small numbers too. I was looking at graduate programs at Amazon and Google for UX specifically, and that’s everything UX, not just research, and my perception was that there are fewer openings.

The big five, let’s say, in consulting are also wrapping up on all those things, with their layoffs and things of that nature. So the entry level is quite dry. I think the mid is in a good state. The [00:25:00] seniors are in a good state. The managers, the directors, the heads of research: that’s another place where I’m not seeing enough. And I think that’s where you probably end up with this middle, where people are going to mix and match. They have their experiences, but they might not be able to spearhead it.

Nick: People like you and me, true seniors with true heritage, should be leading research functions across the globe. Fact. But most of us aren’t. We’re not allowed to lead, we’re not even given the opportunity to lead; people don’t want us anywhere near that. People have been actively keeping us away. So what we’re saying is, the midweight tier is there, but it’s not called the midweight tier anymore.

It’s probably called something like a junior senior, or an early senior, because everyone’s a senior. And that means if you’re a senior but you’re not really a senior, you’re not able to get actual senior support from anywhere, and you can’t get it from your leadership, because the [00:26:00] leadership haven’t got any significant practitioner experience.

So it’s a really big mess. What it really needs is a genuine senior at the top table, to push the evaluative agenda, to provide the sleeves-rolled-up support, to role-model, to set the strategy, to right the ship. Because right now we’re lost in a world of surveys and interview data, which is a mess, and no one’s learning.

That’s the important part. No one’s learning. And so all these tools are going to come into play. It’s a really bad trajectory, and this is why, to bring it back to the beginning again, evaluative and the value it produces in defending against AI is undeniable. If people don’t get hold of it now, you won’t have a career in five years’ time. You just won’t.

Vy: But do you foresee, if we dig into those five years with evaluative research and [00:27:00] AI, how that’s going to look? Let’s say I’m just starting off and I’m thinking research is it. What could I expect in those five years?

Nick: Right.

Okay. There are so many angles to this, so I’m not even going to remember all of them, but a bunch just sprang to mind. For one, let’s just take context. There’s a big question as to whether the work context stays home, becomes more office, or more hybrid. There are all kinds of discussions. Personally, my opinion, based on nothing, is that there’ll be more office-based work coming, more hybrid, more pods, more concession, you know what I mean?

I think we’re too far away now; we’re not going to go back to where we were, but there’s still a compromise position somewhere in the future. In that world of more physically co-located work, what are you going to do, send [00:28:00] R2-D2 down there, or some kind of android or robot? Or start taking IoT data from CCTV and everything else?

Forget all of that; it’s just a messy legal, privacy, and ethical nightmare. Having a human being is just a ton cheaper and better, frankly. So contextual inquiry and ethnography are back in the toolbox. Forget getting an AI bot to do those; it’s not happening, at least not in our lifetimes, or at least not in our working lifetimes.

Right. So that’s real. That’s context. That’s number one; that will save the day. And I think we can all agree it’s going to change from where we’re at. So that’s an improvement in the human direction, straight up, to begin with. What else? Okay, market research. This market-research style needs to get back in its box. It’s grown legs and a tail and three heads. It’s got value, I’m not killing it; it’s just overblown value. And we’ve been boiling the ocean and doing all this overworking stuff. In five years’ time, I think human [00:29:00] beings are much more efficient in the way that they’re doing their analysis with the tools that we’ve got.

So the LLMs that are really screwed up right now get largely fixed. All the bigotry and racism and the other stuff that’s kind of cooked into them gets cooked out. There’s still some bias, there’s still some hallucination and omissions, but they’re manageable, and it becomes tooling. At which point, the evaluative researcher using generative tooling is, I think, what the face of the researcher looks like. But there’s one massive part here: in five years’ time, misinformation is five years worse. And it’s getting worse quickly, really fast. I think in five years’ time, misinformation either gets to a crisis point, or it doesn’t and just becomes so bad.

But in which case, we’ve got to have,

Vy: Do you mean by misinformation the amount of bad [00:30:00] signals, or misinformation as in what people would typically see in the news? Which angle of that do you mean?

Nick: Okay. I don’t mean fake news, but I do mean invalid information, right? Or wrongly held beliefs, where there is a factually correct answer. Misinformation is on such a trajectory that either there’s a big crisis and the world suddenly becomes very scientific, which I don’t think is likely to happen, or we will notice the effect that misinformation has on the ability to generate money.

Because right now misinformation is generating money, but there’s going to come a tipping point when there is so much misinformation that it becomes really hard to use it to make money; it becomes so polluted, people are so confused and don’t know what’s going on, that it’s a real problem. I think at that point, the value of the human being is [00:31:00] very much as a kind of misinformation lie detector, or signal. I think that’s,

Vy: That's already happening, just to add to that. And maybe it's just me, but from my observation, and I've talked to a few strategists in design and product as well, misinformation, and I like that definition you put up, is already present in a way. Because look at how many businesses have failed creating ChatGPT wrappers, or became irrelevant, from copywriting to video production to just another way to generate a picture.

Like a lot of them, and I'm sure listeners can pinpoint at least one tool. But the same applies in hardware, which is ten times harder for researchers, designers, and engineers, basically. We have the Humane AI Pin, which I'm not sure if you looked into, but there are so many issues with it, because it's just another thing that [00:32:00] has very limited battery life.

It has very odd gestures, which are going to give carpal tunnel syndrome to anyone using it, gestures that are super hard to learn. So many UX factors are broken there. That's where it's definitely built on misinformation. Or another device like the Rabbit R1, a hardware device meant to be a new iPhone, which basically uses LLMs with no interface to fulfill the tasks you want it to fulfill. You'd ask it to call your friend, and it takes two minutes of crunching, of cloud syncing, to respond. And I'm literally just skimming the surface here; there are a ton more issues than that. But all of those [00:33:00] originated with someone's gut feel, with misinformation, with unproven or very limited hypotheses, to me. So that's already happening. The reckoning, so to speak, is starting, and the impact on the bottom line is just going to pick up. So we're going to be in this graveyard of dead products.

And it's obviously piling up, because we see money being funneled into AI and generative LLMs and everything in between, but it doesn't stack up. That's what I wanted to add, but I think it's already starting. That's what I think the five years is going to look like.

Nick: Yeah.

But I think that's why the role of the human being, the human UXer or the human researcher, is so critically important in a world of AI. We're more important in a world of AI because of that kind of shepherding of information, or misinformation, in the wider world, on our planet Earth.

And I'm going to speak as a kind of biased PhD at this point: we have dedicated PhD-type roles for understanding the world beyond reasonable doubt, or with high degrees [00:34:00] of confidence, that kind of stuff. Because learning about stuff is really important, and you want to do it with a degree of confidence. How important that knowledge is depends upon how hardcore a researcher you put on it, right?

If we’re finding a cure for a terminal illness or putting somebody on the moon, then you need a really hardcore researcher. But if we’re just kind of building a shopping app, you know, you need a good researcher, but it’s not the same level of, of rigor, right? That you would do when you’re putting a man on the, or a person on the moon.

Right, I get that. But we've got nothing currently, like no consideration of validity in our world. Now, it shouldn't just be the domain of the scientist or the PhD, or even the user researcher or UXer. And this is what my talk in Prague is all about. The future role of the user researcher in an AI world is in large part sorting the signal from the noise, right? It's being that scientific [00:35:00] critical thinker, right? AI is going to generate a shedload of misinformation or questionable information. Who's checking that? You can get more bots to check it, I get it, but then the problem just becomes ultimately very meta, right? At some point, like we talked about earlier, you need a human being as the check, as the stop, as the human checking point.

So that’s what I think moving forward, the role of the user researcher in an AI world over five years is, is some kind of validity referee, which is what we call a scientist. 

Vy: But how can we get there? You know, I'm going to reference our podcast chat with Debbie Levitt from Delta CX. In that chat she specifically said, and maybe I'm simplifying it, but she's quite against democratization of research, of UX, of even design. But [00:36:00] that's happening. I've experienced it firsthand myself in businesses where sometimes you almost feel like a black sheep, where everybody's like, yeah, all of us can do research, all of us can design, we just need the right equipment, education, tools, everything in between.

But I feel like that's one of the keywords, and it comes from product management but also from research leaders. Which to me is so dichotomous with what you are talking about, and I wonder what your thoughts on that are.

Nick: Like I said, the reason we have trained researchers in the world is that the quality of information matters, and it can be really important.

What we're doing currently is allowing anybody to do research, because we don't really know or have any appreciation for the quality of insight. Right? And that means it's the difference between lowercase-r research and big Research with a capital R, right? If you [00:37:00] and I need to go buy a new car, we'll do some lowercase-r research to see what our options are, right?

That's very different from me finding a cure for a particular illness, or solving a particular problem. Um, where am I going with all this?

Vy: But how do you democratize? Sorry, de-democratize?

Nick: Yeah, de-democratize. Okay, so we're gonna...

Vy: What would we do?

Nick: Firstly, I totally agree with Debbie, right?

However, I'd say Debbie's spot on, but I think she's being a bit too idealistic, frankly. I'm more of a pragmatist, and I see the world on fire in the same way that she does, so I'm not knocking her at all. Democratization shows so little understanding or value of insightful research, i.e. they don't understand. It's like giving, I don't know, my 11-year-old son the controls to my [00:38:00] multi-million-pound product. Do you see what I'm saying? Because bad insight will take you in the wrong direction, and you won't make the money that you want, you won't get the return on investment that you want. And yet we're not safeguarding the quality of the insight.

So it doesn't make any sense. Now, what we can't do is get all elitist and be like, you've all got to have PhDs or be behavioral, experimental psychologists. But what we also can't have is this have-a-go research where anybody can do it. Because, and this is the really important part, I can have a go at engineering or painting or design or whatever, and whether it's good or bad, I've got a tangible, concrete thing that I can show you.

With research, it isn't tangible. It's very abstract, and you don't know if you've got it right or wrong. Pretty much 99 percent of the time, if you do some shocking research, you don't know that it's bad, you don't know the insight has no validity, and then you go ahead and start informing the team, and they start making decisions and building things.

The fact that we have untrained [00:39:00] researchers calling themselves seniors and doing what they do should be a sackable offense, because of the amount of money and value that's involved. What I want to say is, moving forwards into those next five years with all these AI tools, it just facilitates a lot more of a democratized nightmare: low ROI, terrible products because the robot told me so. That's going to become a problem.

And so in five years' time, the human being is there to actually get us back on track and do proper, valid research that makes money. You see what I'm saying? It's really not difficult, it's so straightforward, but because of the number of people involved and the amount of money involved, everything gets overblown and complicated.

Look what happened to UX, right? UX became UX/UI and market research. Those two things are completely unfamiliar to me as a 15-year UXer. So you're all copying the stuff that we did so successfully, but you're not doing anything close to the same things that we did. And more importantly, no one will listen to us when we say, this is how you make money.

It's not a good look. So AI will march [00:40:00] in and kill everyone if we're not careful. So yeah, that's where we're at.

Vy: And I think one of the things, if I could pull back, is something interesting. I saw your quote the other day, and you mentioned it here too: you said, if your role could be worked from home during the pandemic, then your role is a candidate for replacement by AI.

Is it because we're dissociating ourselves from the actual workstation, being online all the time, and we need that interaction? What was your thinking on that?

Nick: Yeah. So the deciding factor on that one is: are you a knowledge worker or not?

The distinction is, if you are a knowledge worker, you probably could have worked from home, because the work can be done remotely, all through a camera and a keyboard. Whereas if you're not a knowledge worker, you're a nurse, a teacher, a builder, a [00:41:00] delivery driver; whatever you're doing, you're part of the physical world.

Right. If you're building a house, AI can't do that. AI, for our lifetimes, isn't really going to be a primary physician. It might be a support bot, scary as that is, but it's not going to be the doctor or the surgeon, please God. You see what I'm saying?

And during the pandemic we had, we called them key workers in this country, right? People who were allowed to go out during lockdown because they had a job to fulfill. It's almost like those people can't be replaced by AI, because theirs are physical-world jobs.

Whereas everything else, on the face of it, it seems like if it's just intelligence, then in X amount of time quite a lot of knowledge work will be handled by the AI.

Vy: But that includes yourself. Myself. Everyone [00:42:00], basically, to some extent. And you know, my take has always been that this starts immediately.

I already saw a demo from the Coinbase CEO today where he showed his design team, the UX team, and how they approach prototyping. They use their Figma libraries and design systems on Figma, plus a plugin. The plugin takes a prompt; I think it's driven by a startup I'm familiar with called Skipper, but no affiliation.

I'm not recommending them or anything, it's just a fact. And it allows a designer at Coinbase to basically write, design me a login flow, and the plugin, using AI and LLMs, captures the components, which are labeled in a certain way, and spits out a flow, and that is labeled as a starting point.

That can [00:43:00] already do what juniors could do in UI design, if you compare what happened historically with what happens now. It's not proper UX by any means, but to me this is a prime signal of, okay, this is step one. This is where we're going to start automating the knowledge workers, as you put it.

And I think research, strategy, architecture end to end is probably going to come a bit later, but we're already stepping into that realm. Not to spark flames here, but I think it's rightfully so: all of the people who were so focused on UI production should really take this as a call to course-correct.

Because we need to somehow extend it...

Nick: We need to get back to basics. All of this, we're running ahead into automated or supported [00:44:00] futures without having basic competency in the practice. It's like me doing my maths by going straight to a calculator without learning basic mental arithmetic.

Well, it's really hard to use a calculator effectively if you don't understand the math concepts behind it. That's kind of where we're at. And that's why, when you sit somebody in front of a generative AI and they're like, oh, what do I do? It's like, yeah, of course, because you don't know how to do user research.

And that's why you're now looking for a prompt to do it. It's all backwards. It's the same template mentality; prompts are just templates.

Vy: Yeah. 

Nick: Sorry, but Coinbase, by the way, I haven't bought, I haven't traded for probably three to six months now.

Coinbase has had some horrific UX fails in there: really bad architectural, IA-type stuff with duplicated stuff, meaningless sections, overlap, duplicate journeys, some absolute horrors. So if that's what their [00:45:00] AI-generated stuff is creating, is it good? No, it ain't. But I don't think anybody knows that, because they probably haven't evaluated it properly.

Vy: And that's, I guess, us looping back to the opportunity for evaluative research. You're sensing this big opportunity for evaluative research, right? And that's where AI could help out. How has your evaluative research flow been so far? Where are those ideas coming from? Are you actually starting to apply any of that, or...

Nick: Well, no, I mean, irrespective of AI, right?

Because, as we've been talking about, we are in a very commercial world where value and money are critically important. If you can't deliver it, you're probably going to lose your job. Like I said, forgetting AI, evaluative is where the value is. It's how you move the needle. It's how we make money.

It's how we safeguard our jobs. That's become really forgotten in this market-research-flavored UX world. So all I'm trying to do is shout about it, because again, all of the coaches and [00:46:00] mentors and bootcamps, they haven't talked about user-centric design to date. They've talked about wireframing and empathy and design thinking and all the other stuff that they've taught, but they never taught any concept of user-centric design, and evaluative is a fundamental component of user-centric design.

It's just a fundamental activity. I can understand that user-centric design is a bit of an uncool term, or it's old and people don't want to get down with it; that's fine. What people will want to do is save their jobs and income and careers. With AI coming at them hard on the generative side, and knowing that evaluative moves the needle on money, I'm just shouting for awareness: your mentors haven't told you about user-centric design because they weren't there.

They've misrepresented their experience. So I'm telling you: get evaluative and do it on your golden journeys. It's not rocket science. Do it immediately, do it quickly, and get good at it, because AI is coming. [00:47:00] You need to generate money right now, for your product manager and for everybody, because we want to do good practice.

And we also want to defend against AI coming to take our jobs. 

Vy: I'm conflicted, you know. I like your ideas, but I feel like generative is probably a bit more safe, in my head at least. The generative side of things.

Nick: That's why everyone's doing it, man.

Vy: Yeah. But it's like we are so far away from doing it right.

That's why I think it feels so safe: because it's even more vague, even more quote-unquote democratized. It requires such a deep understanding of experimentation, of how you approach hypotheses, how you synthesize things, how you unbias decisions. And even the specificity is lacking.

Whereas with evaluative research, thinking very technically, I can picture exactly what I'd need to tie together from a product strategy perspective to create a model to [00:48:00] test a simple flow and use some sort of SUS, or some other usability scale, at a minimum. That could be done so quickly.

I wouldn't even,

Nick: Honestly, I'm not a big fan of SUS, the System Usability Scale. I hate it. I really don't like it. If you do, that's fine, and maybe that's very unfair of me; it's just my personal experience. Like heuristics, it becomes a misinformation point. It becomes a bullshit, fairy-tale factor, right?

Okay, if anyone's listening and is inspired and wants to do evaluative work: you don't need SUS or any other kind of rating scale. You just need to sit somebody down and watch them. When there is a problem, you'll know. My style is very much: here's the thing, go use it, I'm going to try and stay quiet.
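For context, the System Usability Scale Nick is referring to is a standard 10-item questionnaire, each item rated 1 to 5, scored to a 0-100 value. The scoring is simple enough to sketch in a few lines; the example responses below are made up:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100).

    `responses` is a list of ten Likert ratings from 1 (strongly
    disagree) to 5 (strongly agree), in questionnaire order.
    Odd-numbered items are positively worded and contribute
    (rating - 1); even-numbered items are negatively worded and
    contribute (5 - rating). The raw 0-40 sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, rating in enumerate(responses, start=1):
        if i % 2 == 1:
            total += rating - 1   # positive item
        else:
            total += 5 - rating   # negative item
    return total * 2.5

# Hypothetical participant: agrees with positive items, disagrees with negative
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Note that the resulting number is exactly the kind of aggregate Nick warns can become a "fairy-tale factor" when it stands in for watching real users.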

I'll jump in when I need to; I won't let you struggle too much, et cetera. They need to struggle, but not for too long. When they're struggling, when there are those long [00:49:00] pregnant pauses, you can see that they're worrying, that they're trying to work something out or they can't understand it.

You can gently prompt them. Not too soon; you need to allow a little struggle time, because they're thinking. But there's an appropriate moment where you just go, what's going through your head right now? Or, what are you thinking? And they'll go, yeah, I can't find the call to action. Or, does that mean this or that?

Beautiful. That's a lovely nugget of insight right there. It's on the recording, or you write it down, you move along, you get through that journey. You're going to have a list of stuff: didn't understand the call to action, didn't like the copy, couldn't find this, didn't understand what happened when the page turned, whatever.

A million different usability-type, interaction design things, or semantics, information architecture, journey-level stuff. That's gold. It isn't the same kind of claim as, "I did generative work with 3,000 people and everybody wants to buy blue." It's not as simple as that, not as cut and dried, [00:50:00] not as easy as that.

In my mind, evaluative is a lot easier because you're getting an aggregation, a collation, a collection of almost binary outcomes: can you do it or not? Whereas when you think discovery, it's more attitudinal, you know what I mean? And that's when the arguments come in: well, you only did it with 15 people, I want to see 15,000 before it's got any kind of validity to it.

That's the market research mentality. So in many respects it's easier, because either you can do it or you can't. I guarantee every single one of your listeners: take whatever your current live product is and run five or ten users through it.

Just like Nielsen Norman said, you're going to get most of the problems. I'm not getting into the 80 percent, 20 percent and that BS. You're going to find stuff that you had no idea about. And the most important part is your whole team and product manager are going to go, wow. And it's that wow moment. They're going to go, holy crap.

Why haven't we done this before? Why aren't we doing this every iteration, every sprint? Because [00:51:00] it's literally eyes closed versus eyes open. And once you've opened the blinkers, you can't close them again.
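The sample-size claim Nick alludes to traces back to Nielsen and Landauer's problem-discovery model, which estimates the share of usability problems found by n test users as 1 - (1 - L)^n, where L is the probability that a single user exposes a given problem, around 0.31 in their published averages. That figure is a rough heuristic, not a universal constant, but the curve explains the "five to ten users" rule of thumb:

```python
def problems_found(n_users, p=0.31):
    """Expected share of usability problems uncovered by n test users,
    per Nielsen & Landauer's model: 1 - (1 - p)^n, where p is the
    probability that one user exposes a given problem (~0.31 in
    their aggregated studies)."""
    return 1 - (1 - p) ** n_users

# The curve flattens quickly: most problems surface in the first handful of users
for n in (1, 5, 10, 15):
    print(n, round(problems_found(n), 2))
```

With p = 0.31, five users already uncover roughly 84 percent of problems, which is why repeated small rounds of testing every sprint beat one giant study.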

Vy: No. And I think, even between the lines, one of the calls to action people can take away is to almost choose themselves to do it. Because it's very unlikely, and maybe that's just me, but it's very unlikely that technical project managers who just got into the business to do generative AI are going to be focusing on that evaluative side.

I think it has to come from user researchers, you know, they have to start leading and maybe even upping the standards if they’re low as well. 

Nick: Have you ever watched a product manager or delivery lead, or even a designer, run a research session or facilitate one, either discovery or user testing, generative or evaluative?

It's the same junior, schoolboy errors made every single time. You can tell when it's an amateur and [00:52:00] you can tell when it's a professional. It's the same thing as asking me to draw Spider-Man versus asking a designer or an illustrator to draw Spider-Man. Mine's gonna look like a stick figure.

And the other's going to look like Spider-Man, because one's an amateur and one's a professional; one's practiced and one isn't. This idea that anyone can do research and you need no experience is irresponsible, naive, and stupid. It really is. Or let me put it another way: anybody can do product management.

How about that? 

Vy: That's a hot take. That's awesome. Nick, I think we do need to wrap it up because of our time-box limits, but where can we direct people? And I'm also going to welcome comments, because a lot of this session's discussion points were based on comments from the audience. So please leave comments below on whatever platform you use. But Nick, where can people find out more about you?

Nick: I'm pretty active on LinkedIn, obviously. I've got two conferences coming up. There's one at UX Lisbon, where I'm doing a half-day AI workshop [00:53:00]. But the other really interesting one is my signal-to-noise talk at WebExpo in Prague, the last week of May. So keep your eyes out for that.

If you're that side of the world, I would go. It looks like an amazing conference, and it's huge, I think 1,600 or more people. So yeah, between LinkedIn and conferences, and of course right here on Vy's podcast. Again, I got so much good feedback from the first podcast, so thank you all for that.

It's been great, and it helps keep me going. And if there's a live Q&A that Vy and I can do to kick the tires on some of these issues, I know we've been all around the place, there are a lot of heavy, interconnected issues we've talked about today. So if you want to get into more detail or have something you want to talk about, I'd be very happy to do that. But otherwise, grab me on LinkedIn.

That’s kind of, that’s where I go to avoid [00:54:00] doing analysis work when I’m procrastinating. 

Vy: Awesome. All right. Thank you so much, Nick.
