Ghada Mohammed Alsebayel


Persuasive Technology: Could It Be Ethical?

Image of a scientist in a lab coat introducing a mobile phone to a person in a cage.

I was only 17 when I created my first Facebook account. That day marked the beginning of my social media journey, or should I say obsession? At the time, I was getting ready to go to college. Anxious about adapting to a new environment and making new friends, I started chatting on Facebook with people who were planning to attend the same school as I was. We talked about our hopes and dreams; we shared our doubts and fears. By the end of the summer vacation, I had made a lot of friends. At first, this model of interaction felt strange to me. How could I come to know someone so well long before ever meeting them? How could I have a deep conversation with a stranger who would then become my best friend, rather than the other way around? Technology has indeed changed the way we live as well as the way we do business.

Alvin Toffler referred to technology as “The Great Growling Engine of Change”, and he was right! Technology is impacting our lives, from the way we interact with one another to deeper, more permanent matters: the way we perceive ourselves and the world around us, our psychological and mental health, and our morality, social values, and norms. However, the great growling engine of change has had many painful consequences for its victims, often as a byproduct of progress. Among the most devastating side effects that studies have associated with social media and intensive technology use are rising rates of depression, alienation, and, shockingly, self-destructive behavior. One recent study detected increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents between 2010 and 2015 and linked them to increased new media screen time (Twenge, Joiner, Rogers & Martin, 2017). An extreme example of the kind of harm social media can expose young people to is the “Blue Whale” challenge, a game consisting of a series of tasks players must complete before moving to the next level. This perverse game culminates in players taking their own lives (“Blue Whale: The truth behind an online ‘suicide challenge'”, 2019). The Blue Whale is only one of many examples (alongside Slender Man and the Momo challenge) of how manipulative and controlling social media can be.

There is a great deal of literature on the unintended harms of technology and social media (see the annual conference on the unintended consequences of technology, 2018). However, a relatively new and less frequently addressed question is: if technology has the power to influence people’s behavior and attitudes so strongly, how could it be used actively and intentionally to reinforce positive, life-affirming behaviors and values?

In order to fully comprehend the implications and future potential of technology, we must develop a better understanding of our relationship with it. Peter-Paul Verbeek, a Dutch philosopher of technology and the chair of the philosophy department at the University of Twente, is one of the researchers who aspire to address the question: what is technology in relation to humans? He argues that technologies are not passive objects used by human subjects; rather, technology mediates human-world relations. Verbeek refers to this as “Technological Mediation” theory: we shape the world through the actions and practices we carry out with technology, and technology in turn participates in shaping our world by forming our perceptions and creating our experiences (“Mediation Theory”, 2019).

Verbeek bases his theory on a useful taxonomy introduced by Don Ihde, an author and philosopher of science and technology. With his “post-phenomenological” approach, Ihde distinguishes four types of human-technology interaction. First, the embodiment relation, where humans and technology combine to perceive the world, as when you use a telescope to look at the stars rather than looking at the telescope itself. Second, the hermeneutic relation, where humans depend on technology to perceive the world, as when you use a thermometer to read the temperature. Third, the alterity relation, where humans interact directly with technology, as when you use an ATM to withdraw money. Lastly, the background relation, where technology sits in the background and we are not constantly aware of its existence, as with an air-conditioning system (“What can we learn from Don Ihde?”, 2019).
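For the programmers among us, Ihde’s taxonomy can be pictured as a simple classification scheme. The following sketch is purely illustrative; the enum and the mapping of examples are my own rendering of the four relations, not Ihde’s notation:

```python
from enum import Enum

class HumanTechRelation(Enum):
    """Don Ihde's four human-technology relations."""
    EMBODIMENT = "technology merges with us to perceive the world"
    HERMENEUTIC = "technology represents the world for us to read"
    ALTERITY = "we interact with technology as a quasi-other"
    BACKGROUND = "technology shapes our context without being noticed"

# Ihde's own examples, mapped onto the four categories:
EXAMPLES = {
    "telescope": HumanTechRelation.EMBODIMENT,         # we look *through* it
    "thermometer": HumanTechRelation.HERMENEUTIC,      # we read the world *off* it
    "ATM": HumanTechRelation.ALTERITY,                 # we interact *with* it
    "air conditioning": HumanTechRelation.BACKGROUND,  # it works unnoticed around us
}

for device, relation in EXAMPLES.items():
    print(f"{device}: {relation.name.lower()} relation ({relation.value})")
```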

Even though the four modes of human-technology interaction introduced by Ihde are useful in deciphering our complicated connection with technology, many modern technologies do not fit neatly within only one of the four categories. Technologies nowadays can be more intimate than embodiment when they become part of our bodies, as with an implanted heart pacemaker. Likewise, technologies are no longer merely in the background; they provide a smarter, more complex context, as with a smart fridge that knows what is inside it and can suggest recipes, or a self-driving car that can work out the shortest route to one’s workplace (“Mediation Theory”, 2019). Even a basic website is nowadays smart enough to engage with its users on a whole new level, addressing their subconscious and pulling persuasive tricks on them. More than ever before, technological instruments are becoming tightly integrated into our lives: highly advanced, intelligent, and more influential.

Stanford University has supported a “Persuasion Lab” since 1996. BJ Fogg, the lab’s founder and director, devotes his time to a multidisciplinary study of behavioral change, motivation, ethics, and persuasion on one hand, and technology design and program analysis on the other. He is the founder of captology (an acronym derived from Computers As Persuasive Technologies). In 2003, Dr. Fogg published the first book in the field, Persuasive Technology: Using Computers to Change What We Think and Do. Fogg mentions in his book that when he first disclosed his ideas and research attempts, he faced varying responses. Some of his colleagues were upset and concerned about the potential misuse of his work; others went further, boycotting him at conferences and calling his research immoral in peer reviews. In contrast, other people were excited about Fogg’s work because they saw its potential for marketing and increasing sales (Fogg, 2011). The topic remains controversial, so rather than debating whether it is moral or immoral to use persuasive technology, I intend, in this article, to focus on exploring the possibility of utilizing persuasive technology for positive social impact.

One technologist interested in developing a framework for how technology should “ethically” drive the thoughts and actions of billions of people from their screens is Tristan Harris, a former design ethicist at Google. Harris argues that merely being aware of the persuasion tricks within interactive applications is not enough for users to prevent or limit an application’s impact. In other words, Harris claims that users currently have only two choices: to be online, distracted, and vulnerable to persuasion, or to be offline, isolated, and afraid of missing out. It is the responsibility of designers to create a middle state where users are online yet fully able to block any persuasion or distraction they did not originally want. Designers of social media and tech software are responsible for building tools that help users resist persuasion, since persuasion works on a deeper level of our subconscious that we, as users, cannot counter on our own. In a TED talk, Harris gives the simple example of a chat application: you turn on your laptop intending to work on a project, a message pops up, and you cannot resist opening it, which distracts you from the work you were originally carrying out. What if the designer of the chat application added a feature letting you decide beforehand that you do not want to be interrupted for a certain period of time, a tool that holds any message until your work is done?
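As a thought experiment, here is a minimal sketch of how such a “hold my messages” feature might work. All the names here (FocusInbox, start_focus, and so on) are hypothetical; this is one possible reading of Harris’s suggestion, not an actual product design:

```python
import time
from dataclasses import dataclass, field

@dataclass
class FocusInbox:
    """A chat inbox that queues messages during a user-defined
    focus window instead of interrupting the user."""
    focus_until: float = 0.0                       # timestamp when focus mode ends
    held: list[str] = field(default_factory=list)  # messages waiting to be delivered

    def start_focus(self, minutes: float) -> None:
        """The user decides beforehand not to be interrupted."""
        self.focus_until = time.time() + minutes * 60

    def receive(self, message: str) -> None:
        """Incoming messages are held silently while focus mode is on."""
        if time.time() < self.focus_until:
            self.held.append(message)   # no pop-up, no distraction
        else:
            self.notify(message)

    def end_focus(self) -> None:
        """When the work is done, deliver everything that was held."""
        self.focus_until = 0.0
        for message in self.held:
            self.notify(message)
        self.held.clear()

    def notify(self, message: str) -> None:
        print(f"New message: {message}")

# Usage: two hours of uninterrupted work, then the held messages arrive.
inbox = FocusInbox()
inbox.start_focus(minutes=120)
inbox.receive("Hey, are you free?")   # held silently until end_focus()
inbox.end_focus()                     # "Hey, are you free?" is delivered now
```

The design choice worth noticing is that the user, not the sender or the platform, decides when interruption is allowed; that small shift is exactly what Harris means by a deeper design goal.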

Harris argues that such a minor addition is in fact a reflection of something more profound: a deeper design goal. Typically, the design goal for a chat application would be “to send messages quickly and easily”. What if, instead, the design goal were more reflective and aimed higher in terms of human values, such as “to establish high-quality communication and relationships”? By upgrading the design goal, Harris proclaims, we can let our values flourish through technology rather than be diminished by it.

Harris insists on the urgency for tech people to think about our human values and to work relentlessly to align technology with our humanity. He helped organize a meeting between leading IT designers and Thích Nhất Hạnh, a 91-year-old Vietnamese Buddhist monk. The purpose of the meeting was to address questions like: what are our human values? What would technology look like if we designed for higher goals? I believe such conversations are no longer a luxury; they are becoming a necessity. I believe that the “solutionism” mentality, the belief that there is a “good” solution to every problem, has led Silicon Valley astray. In other words, efficiency is no longer enough when developing applications and social platforms; we need to consider morality and human values as well. It is time for Silicon Valley to consider the big picture.

Upgrading design goals is only the beginning of a world where technology aligns with our humanity. It is only the beginning of an answer to the question of how to employ persuasive technology to promote ethics and positive social influence. More work is required to match the speed of technological advancement with the construction of policies and guidelines. Another crucial “how” has yet to be answered: how do we develop a framework to govern the use of persuasive technology and prevent malicious usage?
 

References

• Twenge, J., Joiner, T., Rogers, M., & Martin, G. (2017). Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time. Clinical Psychological Science, 6(1), 3-17. doi: 10.1177/2167702617723376

• Blue Whale: The truth behind an online ‘suicide challenge’. (2019). Retrieved from https://www.bbc.com/news/blogs-trending-46505722

• Frank, A. (2019). Managing the Unintended Consequences of Technology. Retrieved from https://singularityhub.com/2018/11/20/managing-the-unintended-consequences-of-technology/

• Mediation Theory. (2019). Retrieved from https://ppverbeek.wordpress.com/mediation-theory/

• What can we learn from Don Ihde?. (2019). Retrieved from https://www.futurelearn.com/courses/philosophy-of-technology/0/steps/26324

• Fogg, B. J. (2011). Persuasive technology: Using computers to change what we think and do. Amsterdam: Morgan Kaufmann.

• Distracted? Let’s make technology that helps us spend our time well | Tristan Harris | TEDxBrussels. (2019). Retrieved from https://www.youtube.com/watch?v=jT5rRh9AZf4
* Image credit: The Economist, 2016