This series is a showcase of individuals who are striving for positive social and ethical change in the blockchain and web3 space. The idea behind this project is to shed light on insightful developments within the industry, and to promote flourishing within the wider community. The series is a collection of interviews with thought leaders who are driving this kind of change.
My guest today is Asad Anwer. He is a computer scientist with a focus on user experience and product design, who imbues his work with both his creative passions and his knowledge of psychology. He is the co-founder of ByteCorp, an AI consultancy with close ties to the world of web3. He is also a friend of mine.
In this interview, I asked Asad for his insights across a range of areas, and to delve into the ethical questions connected to the fields he works in.
I understand that you recently completed a Product Psychology Masterclass. How do you apply your knowledge of psychology to the field of user experience?
The connection has always been there. The first design-related course I took was called Human-Computer Interaction. That course uses a Venn diagram to explain how the connection is conceptualized: one circle for engineering, one for design, and a third for psychology and behavioral science. Human-computer interaction sits at the intersection of all three.
That importance was highlighted to me specifically within the masterclass. There are over 100 cognitive biases and psychological principles that are (and can be) applied in design to create a better experience for humans. At the core of this, it must be understood that your designs need to be human-centric. If you are designing for humans, then you are bound to study how we behave and how we think.
The more you understand that, the more credibility you have, and the more impressive the tools you can create. To give an example of how psychological ideas can be applied to design, take Hick's Law: the more options you give a user, the longer it will take them to make a decision. We have all experienced this, whether with software or in other circumstances. A common example is going to a restaurant and being handed a menu with far too many items.
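For readers who want the quantitative form, Hick's Law is commonly written as

T = b · log2(n + 1)

where T is the average time to make a decision, n is the number of equally likely options, and b is an empirically fitted constant. The logarithmic growth is exactly why trimming a long menu makes choosing feel noticeably faster.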
This sounds so simple when said out loud, but not everybody thinks about it. And this is just one of the cognitive principles that need to be considered. When it comes to the ethics of design, it is important to remember these principles, because they can be used to elicit experiences or responses from users that they never wanted or intended. This was discussed within the course, where there was a clear recognition that we should not be using these principles to create such unethical designs or experiences.
This needs to be considered more often, because people highly, highly underestimate the influence of design on human behavior. How we interact with our phones, computers, and other devices has a huge impact on how we think and how we make decisions, so we have to consider just how influential product designers can be on the psychology of the end user. You need to be mindful of the decisions you are making when you design something.
There is a great amount of responsibility involved when you are designing something, and it must be taken seriously. There can be an empathy gap between some designers and their user base, but this should not exist, because designers should recognize what they are doing. I would also like to give a huge shout-out to Growth.design and its founders, as their case studies on design principles have been extremely useful. Plus, they are not afraid to call out big names in the tech world who are using unethical design practices, or dark UX patterns, to get certain responses. They are making a huge impact.
What is the role of aesthetics in user experience design? To what extent do you believe the beauty of something is important to its utility?
There is definitely a connection between aesthetics and user experience. There is a psychological effect known as the aesthetic-usability effect, which is the idea that people tend to perceive aesthetically pleasing designs as more intuitive and easier to use.
There are several aspects people take in when they use something, but the first is the visceral aspect: the immediate impression something makes the moment you see it. If something is visually appealing at first glance, that can have an immense effect on how people interact with it. It is by no means the only factor, or even the primary factor, but the way something looks plays a huge role in how people perceive its usability and ease of use. Aesthetics can also affect mood, so it makes sense to give people a pleasant experience by making the product look and feel good too. Anything less would be unfair to the people using it.
I understand that you and ByteCorp have been working in the automated vehicle space for a while. Have you heard much about the debate surrounding the "trolley problem" regarding AVs? As an overview, this is the ethical problem of an automated vehicle finding itself in a position where it will potentially harm somebody (or multiple people) regardless of where it goes. The question is then how it should decide whom to potentially harm or kill. When working in this industry, have you found that the trolley problem is discussed much? And do you have any insights yourself?
There is a lot of discussion about automation and electric vehicles, but in my experience this question is not being talked about as much as it should be. It is a very serious question, and a very necessary one for us all to be exploring as automated vehicles become more and more commonplace.
Now that I think about it, this is the sort of question designed to provoke thought, because I do not think it is the type of problem that has a solution. There is no right answer. We cannot even begin to truly quantify the harm caused or the lives saved, because each person carries their own universe of thought, and so the loss of any one of them is deeply important. We can try to weigh the number of people involved, or how old they are, or whether they are parents, or what their jobs are, but these discussions never solve the problem. And that is what is great about it: it creates a genuinely thought-provoking scenario for all of us to consider.
If only philosophy and philosophical discussions were black and white; life would be so much simpler! But sadly (I don't know exactly why I'm saying sadly), the whole course of life has to exist in this grey, and I think the answer to this problem does too.
The world of tech is filled with so-called "dark UX" patterns. What do you think is the best way to prevent these patterns from being created? Do UX designers need to be encouraged to think more ethically about their creations?
This is definitely a big question. Three things have given me insights into this topic: my course on product psychology, courses from the Interaction Design Foundation (IxDF), and a documentary called The Social Dilemma. The documentary raised several discussions about how social media affects the way we interact and think, and as I was watching it I kept thinking to myself, "it's a design problem, it's a design problem". And this is definitely the case. There are a LOT of big companies and projects utilizing these dark UX designs, and it is causing problems.
I came across a case study showing how Amazon uses design to make people buy more than they need. Another example is how LinkedIn was fined $13 million over its dark patterns (though nowadays we refer to these as dishonest or deceptive design). If you check out deceptive.design, you can find a whole hall of shame of companies making people do things they would never have wanted to do.
I think one possible solution would be to educate designers on the consequences and the depth of the harm they can cause when they create these designs. Even things at the lighter end of the spectrum, like the way companies make you agree to cookie policies, can cause huge problems. Many sites do not even let you see the website unless you say yes. Others let you in, but block out a lot of the content or make it harder to use.
If I am in a state of selective focus, where I have something I want to do and I am simply trying to do it, then I am going to ignore a lot of things, such as reading what I am agreeing to when it comes to cookies. I will just continue, accept, and move on, because I am busy. But agreeing to cookies for some services can be very dangerous, and can lead you to agree to things you never normally would. The people who create these designs know this and exploit it. It could be done in a more humane way, but they choose not to.
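To make the contrast concrete, here is a minimal sketch of what a more humane consent model might look like in code. Everything in it (the ConsentChoices shape, the defaults) is hypothetical and illustrative, not any real site's implementation.

```typescript
// Hypothetical, illustrative consent model: the honest version of a cookie
// banner. Non-essential tracking defaults to OFF, and rejecting everything
// is exactly as easy as accepting everything.

interface ConsentChoices {
  essential: true;       // required for the site to function; never optional
  analytics: boolean;
  advertising: boolean;
}

// Honest default: nothing optional is pre-ticked.
const defaultConsent: ConsentChoices = {
  essential: true,
  analytics: false,
  advertising: false,
};

// "Accept all" and "Reject all" are symmetrical, one-click actions.
const acceptAll = (): ConsentChoices => ({
  essential: true,
  analytics: true,
  advertising: true,
});

const rejectAll = (): ConsentChoices => defaultConsent;

// A deceptive banner inverts this: optional tracking pre-ticked, and the
// reject path buried behind extra screens. Same data model, different ethics.
```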
Companies need to be upfront and transparent about what they are doing. This is why the lack of empathy is so dangerous: it leads to tricking users into doing things they do not want to do. It is like somebody pretending to be your friend, or to have your back, when in reality they are trying to take things from you. Companies need to be open about what they are doing so we can stay informed.
We make movies about potential robot uprisings, like I, Robot or The Terminator, where we imagine futures in which machines hurt us or control us. But in the present day, we already have machines that control and hurt us, and they are far more insidious than the ones we commonly imagine. We spend so much time on devices and programs that already control us.
People need to realize this, and realize that if things are wrong by design, then they need to speak up. The designers and developers need to speak up. If a company like Instagram notices that people are spending hours on end on its platform, then somebody in the company needs to try to stop it. They want us on those sites for as long as possible so they can push ads at us and make more revenue from us. But why can't they draw a line, say that a certain amount of revenue is enough for them to survive, and then make the effort to get us to stop spending so long on these platforms, so that we can be human and go and do something else?
Netflix does this. Even though they also bend some design rules in their own favor, they do ask you, "are you still watching?" That tells you, "okay, maybe I have been on this for a while, so perhaps I should consider doing something else". Even something that small can be enough to nudge a person to wake up and recognize that their life is not just staring at a screen.
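As a rough illustration of how cheap such a nudge is to build, here is a minimal sketch of an autoplay check. All names and thresholds here are hypothetical; this is not Netflix's actual implementation.

```typescript
// A minimal sketch of an "are you still watching?" nudge. SessionNudge,
// onEpisodeEnd, and the thresholds below are all hypothetical names chosen
// for illustration.

const EPISODES_BEFORE_PROMPT = 3; // how many episodes may autoplay unattended
const IDLE_TIMEOUT_MS = 60_000;   // how long to wait for the user to respond

class SessionNudge {
  private unattendedEpisodes = 0;

  // Call this each time an episode finishes and the next one is queued.
  // promptUser shows the dialog and resolves true if the user responds.
  onEpisodeEnd(promptUser: () => Promise<boolean>): Promise<boolean> {
    this.unattendedEpisodes += 1;
    if (this.unattendedEpisodes < EPISODES_BEFORE_PROMPT) {
      return Promise.resolve(true); // keep playing, no prompt yet
    }
    // Ask "are you still watching?" and pause if nobody answers in time.
    const timeout = new Promise<boolean>((resolve) =>
      setTimeout(() => resolve(false), IDLE_TIMEOUT_MS)
    );
    return Promise.race([promptUser(), timeout]).then((stillWatching) => {
      if (stillWatching) this.unattendedEpisodes = 0; // user is present; reset
      return stillWatching; // false => stop autoplay, the user has walked away
    });
  }
}
```

The design choice worth noticing is that the default on silence is to stop, not to keep playing; that single default is the difference between a nudge that respects the user's time and one that quietly consumes it.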
The implications of these companies applying these deceptive designs are huge. They distance us from our lives and stop us from seeing life for what it is. They encourage us to say, "well, maybe I do not have much of a life, but at least I have 5,000 likes". But this is so wrong. They have trained us to spend our lives on these services, until using them becomes part of our goals.
It is not supposed to be like this.
And now so many of those who make these social media sites and other applications no longer care that the situation has gotten this bad, because they simply want to push their ads to you and make their revenue. People need to understand that if you are able to use a product or service for free, then you are the product. They are making money off of you by using you. A design cannot be human-centric if it uses you like this, and if it does not allow humans to actually be human.
It might be hard now to stop some of these companies, but what we can do as people is be more mindful of how we interact with these applications and see whether we can mitigate some of this behavior for ourselves. In our free time, we can work on our own growth and keep our distance.
Are there any interesting insights you have had about the current wave of consumer-facing AI (ChatGPT, Midjourney, etc.) in terms of UX and product design?
First off, I think that the world of AI, specifically ChatGPT and Midjourney, is amazing. This is the kind of thing that makes us truly believe in the technology and understand how significant this field is. It is not like the older technologies we saw in the past, where an AI tried to perform small tasks, such as assessing whether an image contained a cat or a dog, and only sometimes got it right. That stuff was kind of impressive, but it is nothing like what we are seeing now with the new wave of AI.
And it is just the start. People said the same about previous technologies, but now it feels like the field has really started, and that this is the real beginning.
There is a lot of fear around these technologies and certain jobs and careers. I remember coming across an article arguing that AI would replace artists. Artists?! Seriously? This is something I totally disagree with. If you say that an AI model is going to replace artists, then what is art in your perception? How do you conceive of art? What do you think when you see or experience a piece of art?
Art is all around you. It has no rules at all. This is one of the fundamental ideas separating art and design. In design, there are certain rules you must adhere to. But with art, there is total subjectivity. And with that subjectivity, for me, there has to be a person. There has to be a consciousness. For me this is what makes art meaningful: it is about perceiving those different perspectives. You can look at a piece of art and like it when you first experience it, and a few months later look at it completely differently, because that is how quickly your life can affect your view of art.
And I know that what models like Midjourney come up with is derived from some truly amazing artists, but the only actual meaning it has is whatever you tell it and ascribe to it.
What these AI models create is not subjective by itself. It follows rules. The response to the output can be subjective if you show it to different people. But the original meaning you find in human art, the meaning that must have been in Starry Night, that must have been in the Mona Lisa, is just not there. Even if a model comes up with something that looks like those works of art, I will always be thinking, at the back of my mind, about how it is not human and so does not have that original meaning behind it.
It is the same with writing. A common response some people had to ChatGPT was to question why we would ever need copywriters or content writers, but this is not an argument! You will always need writers. You will always need someone with specialized knowledge that AI cannot match yet. For instance, you are an amazing writer; an AI cannot replace that. Instead, we should collectively be talking about how we can utilize these AI models to aid us and make our lives easier.
Unless a task is very, very, very repetitive, or involves manual labor, AI cannot replace it. AI cannot replace the act of writing or making something profound. Even with a lot of data accessible, a human will always be able to come up with something an AI cannot.
This should not be viewed as a fight. We should be looking at how we can use AI to make our work better or easier. We should not be thinking about which professions are not going to be around; we should be thinking about how these tools can help those professions and careers.
Sticking with AI, there is a question about the way these AI models present themselves to the public, and how that may be causing a level of anxiety among some artists. A perfect example is Midjourney, where people write prompts by prefixing them with the word "Imagine". What are your thoughts on that?
When we talk about design for models like this, where you are dealing with many types of audiences and people, we should take a precise approach. We should say things, and name things, as they are. It is not right for a word like this to be used when the system is taking data and inputs and using them to create an output.
And it cannot be justified by saying that it only worries one audience: artists. These models use the work of those artists, so their opinions and feelings matter. This needs to be made right. With what I know about design, and from my own experience, I can immediately see why this would create so much anxiety among artists. If I were working on something like this, I would research hard to find out what words and language could be used to reduce that anxiety.
I have a slightly different example that connects to this. There is a company that sells a lot of artwork, and it recently set up a section of its website for selling AI artwork, priced higher than the works of human artists. Why? That is not fair. You need to credit every artist whose ideas were used to create that image, and you need to treat artists with the respect they deserve. We need to understand the importance of respecting and remembering our audiences.
Connected to this discussion of AI and the future, there is one big concern I have, which might really be a fear inside me: not being told whether we are talking to an actual human being or to an AI. I don't mind concluding that I am talking to an AI, but I would always need to know whether I am or not. As a human being, I need to know whether I am talking to something that has data on the whole world, or speaking to one human being. In some cases, I see this as more important than what is being said.