
Psychology and UX — not what I used to think


Vitaly Mijiritsky

3 years ago | 6 min read

Tediousness

During my college freshman and sophomore years I made my living translating academic papers, mostly English to Hebrew. Probably the first thing that strikes you when you lay eyes on an academic paper is that each and every claim must be backed by a reference (Mijiritsky, V., 2020).

And if there’s just the one reference, that’s a sure sign that you didn’t review your literature properly (krk, 2014), you have no idea what you’re talking about,

and in all likelihood you have a tendency to make up wild claims like that of the Sun rising in the East (says who exactly? Has it been peer-reviewed? Was there a control group? Why is n=1?), and other such nonsense. Each single factual statement requires a source, otherwise it won’t make it past the first review.

This unrelenting tediousness at first feels quite ridiculous, then you get used to it, then you start to mistrust unreferenced claims, and in the end, true to your academic Stockholm syndrome, you can’t bear to think of a less rigorous format.

It’s taken me some time to realize that this format is actually just an expression of a much bigger phenomenon, of the general method that rules the scientific-academic hallways.

A method that says — question everything. Always be in doubt. Take a magnifying glass to each statement that’s used to build a case.

Extreme systematic skepticism. Is the statement true? If it is — is it relevant? If yes — does it necessarily lead to the asserted conclusion? And so on and so forth.

This bunch of questions, along with some others in the same vein, is captured by a single term used in every psychology classroom around the world, undergraduate, graduate and doctorate alike. It's called validity, and its simplest definition is this: “the extent to which a test measures what it claims to measure”.

Photo by Maranda Vandergriff on Unsplash

The Man With The Hammer

I was trained as a cognitive psychologist (M.A.). When I first got into UX, about 15 years ago, it didn’t look anything like it does today.

The tools and technologies were different, the volume and the quality of the knowledge accumulated in the form of design patterns and best practices wasn’t even on the same order of magnitude as what we have today, and I believed that the ideal background for a UX person was cognitive psychology.

This was a very widespread opinion, and I still believe it was true — for that time and place.

The proverbial man with a hammer looks at every problem like a nail. It’s taken me a lot of time to realize that occasionally the cognitive hammer may be put aside.

A UX problem can be analyzed in different ways. I’ve seen visual designers analyze problems through visual-design eyes, and solve them, successfully, through visual-design means. And I’ve seen psychologists do it via psychological means, and I’ve seen developers do it using an engineering approach.

And often they even arrive at the exact same solution but each through a process of their own, because everything is connected.

Some cats can be skinned in many ways, others probably not. I’m not really comfortable with that expression, so — some UX problems are more suitable to certain approaches than to others.

Some aren’t. I believe that today, the proportion of UX issues that cannot be solved without advanced knowledge in cognitive psychology, and that aren’t already addressed by tried-and-true design patterns and best practices (which may have required mastery of CogPsy to develop originally, but are now available off the shelf), amounts to maybe 5% of the total landscape of UX tasks and challenges.

And they tend to come up in highly complex expert systems with a high error cost (in resources or in lives). Of course, within that field itself the number would be much higher.

That’s for advanced knowledge. And basic knowledge can be acquired in a setting that does not require postgraduate studies. It won’t be of the same breadth or depth, but it will be enough for most simple cases — and most cases are, statistically speaking, simple.

However…

Photo by Andrea Lightfoot on Unsplash

The Imprint

I teach UX, so I get to observe dozens of students from varied backgrounds through the course of several months per class. In each class we have a few students coming with an undergraduate, often graduate, degree in psychology.

The one thing that comes much more naturally to them than to other students is this self-discipline regarding the methodological quality of their work.

It’s a discomfort, imprinted upon them in their psych lecture halls, with making arbitrary decisions based on “I think…” or “it seems to me…” or “I saw somewhere online…”. They are concerned with the notion of validity, in most cases unconsciously so, both during the research and the design phases.

This isn’t some kind of a secret art, it’s not exclusive to psychology students, and we try to teach it to everyone, but psychology majors do arrive already equipped with it.

This awareness is not only required when you do “serious” or “formal” research, and not even when you actively define your current activity as research. It’s true for every question that’s being asked explicitly or implicitly, for every piece of data that goes into designing the product.

Each pixel we place on the screen needs to be answering a question that needed to be asked.

The scientific-academic background with its tedious rituals makes you develop a habit of following this process unconsciously, and of asking these questions well and properly — which is the hardest part, the absolutely crucial part, and the woefully overlooked one.

Validity is not a term widely known in the UX community. But the term is not what matters. What matters is the scrupulous methodological awareness of the importance of the exact way we get our data, the data that then serves as the foundation of our products.

This lack of awareness comes through loud and clear whenever you stumble on a UX debate in an online discussion.

  • You see it in the obsession over A/B testing as the definitive magical answer to any UX dilemma, since it seems to be “revealing the truth”. While A/B testing is a wonderful tool, it must be understood thoroughly to be used effectively. Otherwise you’re basically administering a thermometer to a patient complaining of a sprained ankle, and then saying “ah, but look, it says you’re fine”. (Most tests fail — 1, 2, 3 — and even the successful ones are often meaningless).
  • Or in the “why don’t you just ask the users what they want” approach, which never fails to come up in any beginner-level UX debate as the most natural thing in the world. The proper professional response to this should be a teeth-grinding cringe, as it violates what’s been known since 2001 as The First Rule of Usability, and had been deemed obvious for many years prior to that.
  • Or in a general obliviousness to the principles of an experimental setting and what it looks like, which is why a researcher posting a request to click a specific button on a screen (while she is recording and analyzing mouse movements behind the scenes) gets dozens of responses saying “I found it in a second and a half”, “What’s so special about this button” and “Why look for problems where there aren’t any”.
  • Most of all, it has to do with professional integrity, with the ability to identify your working assumptions and their limitations. No research is without threats to validity. So when you present your findings, these threats must be disclosed to the decision makers. This lets them treat the data in the most appropriate way, maybe launching a follow-up study to control for some of the artifacts, or at least learning for next time. Otherwise, in the best case you don’t get better, in the normal case you are just treading water, and in the worst case you’re actively damaging your product.
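To make the A/B testing point concrete: whether an observed lift “reveals the truth” depends heavily on sample size, which is exactly the kind of validity question that gets skipped. Here is a minimal sketch, with entirely hypothetical conversion numbers, of a standard two-proportion z-test showing how the same observed lift can be statistical noise at one sample size and solid evidence at another:

```python
# Hypothetical illustration: the same observed lift (4.0% vs 5.0%)
# is inconclusive with small samples and significant with large ones.
from math import sqrt, erf

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 1,000 users per variant: p ≈ 0.28, no real evidence of a difference.
print(ab_test_p_value(40, 1000, 50, 1000))
# 20,000 users per variant, same rates: p well below 0.01.
print(ab_test_p_value(800, 20000, 1000, 20000))
```

The thermometer analogy maps directly: an underpowered test that “says you’re fine” hasn’t measured what you think it measured.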

As I said, none of this is rocket science. The dry facts and the terminology can be learned easily. And, just as we did with design, we’ve already accumulated a trove of best practices and dos and don’ts for our research methods.

But as with any other skill, even if the theory is easy, it takes a lot of experience to turn it into second nature. And this experience is exactly what psych majors are acquiring when they sweat over those research papers in college. They’ve been imprinted with the right methodology to do UX.
