The Decentralists

Episode 11: The Psychology of Identity with Dr. Agnieszka Rychwalska - Part 1

January 13, 2022 Mike Cholod, Henry Karpus & Geoff Glave

To open Season 4 of The Decentralists we are joined by a very special guest, Dr. Agnieszka Rychwalska, Assistant Professor of Psychology and researcher at the Center for Complex Systems and New Technologies at The Robert Zajonc Institute for Social Studies at the University of Warsaw.

Among other things, Dr. Rychwalska studies the application of complex systems in psychological and social processes (modelling and analysis of social networks, experimental study of social impact mechanisms, analysis of large data sets from social networks mediated by new technologies), in particular to study the impact of new information technologies on social processes.

Data accumulation by a few global actors, surveillance capitalism and manipulation of public opinion – all leading to greater social inequalities – are, in a way, a result of the fact that individuals share so much about themselves online. Is this a result of some unchangeable human trait? Or is Big Tech forcing its users to divulge too much about themselves in order to make money from advertising?

Why do people seem so willing to share so much of their identity online, especially on social media platforms?

What are the dangers of creating multiple identity profiles online in order to be social with friends and family?


Join us this week for part one of a discussion about the psychology of identity and why it is important for all of us to understand who we are and how we identify ourselves online.

Henry : Hey everyone. It's Henry, Mike, and Geoff of the Decentralists, and we have a very special episode this week. It's called The Psychology of Identity with Dr. Agnieszka Rychwalska. So, what's that all about? Well, data accumulation by a few global companies, surveillance capitalism, and manipulation of public opinion, all leading to greater social inequity, are a result of the fact that individuals share so much about themselves online. Is this because of some unchangeable human trait, or is Big Tech forcing users to divulge too much about themselves just to profit from advertising? Dr. Agnieszka Rychwalska is an assistant professor of psychology and a researcher at the Center for Complex Systems and New Technologies at the University of Warsaw. Dr. Rychwalska's research lies at the interface between complexity and social sciences, in particular the psychology of social impact and neuropsychology. She specializes in the application of complexity to the study of the relationship between the brain and cognitive processes. Her second area of interest is studying the impact of new information technologies on social processes. While working on her master's degree and doctorate, she cooperated with the Biophysics Research Division at the University of Michigan and the Center for Complex Systems and Brain Sciences at Florida Atlantic University. Dr. Agnieszka Rychwalska, welcome to the Decentralists.
 
 Dr. Agnieszka Rychwalska: Thanks Henry for introducing me. Thanks for having me here.

Henry : It's great to have you here. So, let's start out with the most basic question: neuropsychology. I mean, it sounds fascinating, sounds complex. What exactly is it?

Dr. Agnieszka Rychwalska: Well, neuropsychology is the part of psychology that tackles the relation between brain processes and psychological and social processes. So, it's basically studying brain function and trying to relate it to any overtly visible behaviour of individuals. I started as a neuropsychologist because I was interested in the process of consciousness: how it forms in the brain, how it works, and how it can be traced down to the function of neurons in the brain. And that's how I came to study complex systems, because the tools within complex systems science allow us to actually do a lot of formal analysis of psychological processes that are usually only studied qualitatively in the social sciences.

Henry : Okay. So, explain to us as well what you mean by complex systems.

Dr. Agnieszka Rychwalska: Well, complex systems are, I mean, they are everywhere. Pretty much, complex systems are any sets of elements that interact with each other and whose, let's say, system state changes in time. So, it's anything from the biology of your cell, where multiple elements interact in real time, for example expressing genes and so forth, up to the brain, where you have those millions of neurons interacting with each other. But of course, that also includes social systems, starting from, let's say, the dyad, so just two people interacting, up to whole societies and our globalized techno-social system right now.

Mike : Wow! That's really cool. So, what you're saying is that by studying, I want to say maybe, patterns of electricity in the brain, it's kind of the same type of thing as studying how, say, ideas transit through a society?

Dr. Agnieszka Rychwalska: The thing is, it's probably not exactly the same, but we can use very similar tools to study both of those. So, complex systems science provides you with a set of paradigms and methodologies for investigating any complex system, and what is interesting is that we find a lot of similarities, let's say global qualitative similarities, between very different systems, the brain and the social system as well. So, for example, we find these kinds of small-world network properties when we study the interactions of different regions within the brain, and similar network structures can be found in any social system. So, this is a very interesting pattern, and this is a very nice area of study where you are really amazed by those similarities.
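For listeners who want to see what those small-world properties look like in practice, here is a minimal sketch, assuming Python and the networkx library; the library and the numbers are our illustration, not something discussed in the episode. A small-world network combines the high clustering of a lattice with the short average paths of a random graph:

```python
# A minimal sketch of measuring "small-world" properties with networkx.
import networkx as nx

# Watts-Strogatz small-world graph: 1000 nodes, each wired to its 10
# nearest neighbours, with 5% of the edges randomly rewired.
small_world = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=42)

# A random graph of the same size, for contrast.
random_graph = nx.gnm_random_graph(n=1000, m=small_world.number_of_edges(), seed=42)
# Random graphs can be disconnected, so measure the largest component.
largest = random_graph.subgraph(max(nx.connected_components(random_graph), key=len))

# Small-world signature: clustering stays high while paths stay short.
print("clustering, small-world:", nx.average_clustering(small_world))
print("clustering, random:     ", nx.average_clustering(random_graph))
print("avg path, small-world:  ", nx.average_shortest_path_length(small_world))
print("avg path, random:       ", nx.average_shortest_path_length(largest))
```

The same two measurements, high clustering and short paths, show up whether the nodes are brain regions or people in a social network, which is the similarity Dr. Rychwalska describes.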

Mike : Actually, I'm recalling a similarity right now, Agnieszka, because what you just said sounds a lot like the way they describe AI and algorithms on social media, right? I mean, they're not, I guess, studying behaviour per se, but presumably you could look at, say, a Facebook newsfeed with 2 billion people in it as a complex system, right? And then there are these predictive algorithms that Facebook and these guys use to deliver content to people. Is there kind of a parallel between studying the systems that you study from a traditional academic perspective, and then that being turned into something that can be used to build a predictive system, like an algorithm for a tech company?
 
Dr. Agnieszka Rychwalska: Well, you are absolutely right. I mean, there are a lot of similarities, but there's one huge difference, which also impacts the methodologies used in science and in, let's say, the practical application of those kinds of predictive algorithms, and that is that in science we aim at explaining things. We want to understand how something works, to build this so-called mental model of a system. We want to know what kind of human traits impact some kind of behaviour, let's say expression on social media. Well, for the predictive algorithm, prediction accuracy is what they are striving for. So, in their models they can use thousands or tens of thousands of variables, the features they use as input to the algorithm, and they don't care which one of them is the crucial one.

Right, as long as it works, I mean, as long as it predicts well, that's okay. As scientists, we don't use those kinds of huge models. We really like to boil things down to, let's say, a few variables whose relation to the thing we want to explain we can actually trace and understand, and this has many implications, because if you understand how something works, you can actually try to modify it in a way that you plan. So, you can kind of plan the future behaviour by changing something, right? Like, if you understand how a car engine works and you want to make it better, then you can make, let's say, targeted improvements at certain points, and you know that they should work fine. Now, if you only focus on prediction, then you are really only relying on past data, and it's very difficult to make a planned change that will result in the kind of changes you actually want to have.
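To make that contrast concrete, here is a hedged sketch using scikit-learn on synthetic data of our own invention; none of it comes from Dr. Rychwalska's research. It sets a black-box model fed 500 features, judged only on accuracy, next to a three-variable model whose coefficients you can actually inspect and intervene on:

```python
# "Prediction versus explanation" on synthetic data, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 500 candidate features, only 3 of which actually drive the outcome;
# shuffle=False keeps those 3 in the first columns.
X, y = make_classification(n_samples=2000, n_features=500, n_informative=3,
                           shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "industry" approach: every feature into a black box, optimized
# purely for predictive accuracy, with no model of which feature matters.
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)
print("black-box accuracy:", black_box.score(X_te, y_te))

# The "science" approach: a few variables whose relation to the outcome
# we can trace, understand, and deliberately intervene on.
small = LogisticRegression(max_iter=1000).fit(X_tr[:, :3], y_tr)
print("3-variable accuracy:", small.score(X_te[:, :3], y_te))
print("coefficients:", small.coef_[0])  # each one is a readable effect
```

Both models may predict well, but only the small one supports the car-engine kind of targeted change she describes.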

Mike : Right. So, it sounds like what you're saying there, and correct me if I'm wrong, is that one of the dangers of building and trying to refine accuracy on a predictive algorithm that is meant to predict behaviour is that you focus on the destination, right? I want to get more accurate to this endpoint, and you don't pay enough attention to how you get there. So, potentially this could be part of the reason why they're having so much trouble controlling these algorithms. You know, I would assume that if all I'm concerned with is that when I post this post, I want as many people as possible to like it or forward it or whatever, to create exponentially more content, and I don't care how I get there, you can kind of see how you may end up in these things like, oh, I don't know, accusations of genocide and stuff like this.

Dr. Agnieszka Rychwalska: Well, yes. I mean, there are a lot of these, let's say, side effects of focusing solely on prediction, right? Because you are optimizing only for one thing, and you don't care about your understanding of the process. But I think there are also dire consequences when those algorithmic systems are used as expert systems, for example in so-called predictive policing, or in a human resources division in a company. Because if you base these algorithms on past data and you don't understand why a certain prediction is of a certain type, then you just hire whoever scores the highest on your features, because your algorithm tells you to do so, right? And you might simply replicate existing biases in the data. When you want to change something, you have to understand which of those features that you are using in your predictive system are the important ones, and maybe you want to scale them down if they are replicating the biases. So, if you want to change the employment system in such a way that there are equal chances for, let's say, all ethnic groups and minorities and so forth, then you have to understand this to be able to introduce such a change.
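Here is a toy sketch of that bias-replication problem; the variables and numbers are invented for illustration, not taken from any real hiring system:

```python
# Bias replication in a hiring model, assuming NumPy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)          # what we would like to hire on
group = rng.integers(0, 2, size=n)  # 1 marks a historically penalized group

# Historical decisions: skill mattered, but group 1 was penalized.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
print("learned weights [skill, group]:", model.coef_[0])

# The negative weight on `group` is the replicated bias. Because this
# model is small and understood, we can see the weight and remove or
# rescale the feature; in a black box with thousands of features, the
# same bias hides inside proxy variables and is much harder to find.
```

The point is exactly hers: you can only scale down a biased feature if you first understand which feature is doing the damage.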
 
Geoff: I think human resources is a good example, because in that sort of prediction you're trying to determine possible behaviour of employees: who is going to quit, who might become a whistleblower, who would be satisfied with not getting a raise as opposed to somebody who would quit if they don't get one. That sort of prediction, which I would argue is still quite gross, in the sense that it's not perhaps as razor sharp as HR and managers would like, but nevertheless, I think that's a little bit different from social media prediction, which tends to be more of a scattergun approach. I think we do like to attribute a lot of evil to social media, and perhaps rightly so, but I often feel that the algorithms that are pushing and promoting content are not terribly sophisticated, because continuing to drive engagement by pushing those centres of the brain that keep people angry is not terribly difficult. That isn't to say that billions of dollars, both of computing power and of development work, aren't spent to do that, but I think it's a little bit different in terms of what Facebook is doing versus some of these learning systems in HR and elsewhere.

Dr. Agnieszka Rychwalska: Well, I would say that there are different risks associated with both, right? Those systems that work for smaller companies usually also have less data, so they can be more sophisticated, because they probably require less computing power. The more data you have, in theory the more possibilities you have, but it also requires better algorithms and a lot of computing power. So, yes, I agree that some of those algorithms online do seem pretty simplistic, and you can actually game them, right? So, we can try to have fun with that and see what we get.

Mike : Oh, you've got to teach me how to do that. I want to game Facebook.

Geoff: I do that, and it's quite interesting to game the system and...

Henry : It's Geoff's hobby, actually.

Geoff: To say it's a hobby is to overstate it, but I do go on Facebook and, you know... ultimately Facebook is trying to sell things. So, the ads that they show me are based on this desperate attempt to determine who I am, to determine what they should show me, and they've never really been successful at showing me an ad for anything that would be of interest to me, and the content that they try to stimulate in my feed is sort of all over the place. So, yeah, gaming the system, it's a great point, can be good fun for sure. Especially in our line of work, where we're trying to learn what these systems are trying to do and how to break them, to figure out if they are truly as sophisticated as people claim they are.

Dr. Agnieszka Rychwalska: Well, the thing is that, you know, even if they are not yet as sophisticated as we fear, let's say, they might become so in the future, and change in this area is happening pretty quickly. So, I wouldn't write them off, because I think there is still potential danger involved there. And also, I would like to point you to one other area, because we've been comparing those two examples: smaller companies, or certain areas of application that are using data and algorithms, against Facebook. But I think the risk is that there will come some point where those things touch. So, for example, when public services, some administrative unit, are deciding whom to give welfare benefits to.

They might not be able to do it themselves and have the capacity, in terms of both HR and computing power, but they will outsource it to someone, and there's a few players who already have so much data that they will be the first choices, and here comes the problem. Joining the needs of public services with the power of the private sector, in certain places of our [inaudible 14:27], can be dangerous. We can see it in, let's say, face recognition. There's already a lot of public debate there, with, you know, employees walking out of Amazon or Google to protest those companies providing those services to the government, right?

Geoff: Yeah. The public sector comment is interesting, because you frequently hear, particularly from those on the right, that the government should be run like a business, and really the goals of government and the goals of business are very different. That isn't to say that there isn't value in business case analysis, in trying to see what the return might be on public investment in certain social areas, but fundamentally the two are very different, and it's a gross simplification to just say that you can take data accumulated in business and then apply it to government. I suspect it will fail more times than not.

Mike : You know, I kind of have a maybe weird question, and I don't mean to put you on the spot here. The way I've always thought about a lot of these folks, whether it's Facebook or Twitter or Amazon or Netflix: when they talk about their algorithms, the recommendation algorithms I think they call them, they talk about this idea that when I go into my Facebook feed or my Twitter feed, and to Geoff's point I see the ads or whatever the case may be, that algorithm has theoretically looked at my past behaviour and is kind of trying to show me stuff it thinks I like. And when I think of something like that, I think of an algorithm that is combing through my metadata in an effort to determine preferences, right? So, to me, when something comes into my timeline, the question is: did they put it there just to suggest something to me and nudge my perception in a direction, or did they put it there in response to a proactive indication that I was looking in a particular direction? I really don't see the difference, and I'm kind of wondering how neuropsychology or something like it deals with this idea: at what point is it influence instead of prediction?

Dr. Agnieszka Rychwalska: Oh, actually, I think there are a lot of studies right now that aim both to predict what you will do given a certain context, and then to provide that context for you to act in, so that the predictions can actually come true. So, they are also creating those contexts, for example by showing you advertisements for some things. They are predicting, from your previous buying behaviour, viewing, or just surfing around the net, what kind of ads you would respond positively to, so there's a high chance you will at least click on them and possibly buy the product, right? And they are then creating the context for you so that you can act on this, and I think this is the problem, right? I mean, this prediction in practice means that they want to manipulate you, of course, to manipulate your behaviours.

And that's what the advertisers are actually paying for, right? Not only for the fact that you will be happy seeing a certain ad or a certain picture or a certain movie, but also that you will be willing to act upon it. In her book on surveillance capitalism, Shoshana Zuboff brought up an example of Facebook advertising to their clients, the advertisers, in, I think, Australia, the ability to pinpoint when students were just in their exam period, because students were more willing to buy things after they had had success in exams, right? So, based on existing data, they are predicting the moment in which students are in this specific state of mind, because they have succeeded, and the platform advertises that it can pinpoint this moment, and therefore, if you place an appropriate ad at that particular moment, you can manipulate them into buying stuff that they possibly don't need. So, these things are obviously connected with each other.

Mike : Clearly a lot of this is based on the fact that we've now got, I think Facebook was founded in 2004 or something like this, right? So, we've now got kind of 15 years of increasingly more information being put into these platforms. So, what role do you think the algorithm played, Agnieszka? Like, if you look at what Facebook was originally intended for, I think originally it was meant to be a place where guys were trying to identify the hottest girls in college, that was Zuckerberg's original plan, but then it ends up becoming basically like a scrapbook, right, where I can post all my stuff and people can look at it, and now we've got 2 billion people on this thing.
 
Because to me, that's almost like an opioid-crisis kind of scenario in tech, you know, where this thing that was intended to be just a scrapbook, a place to show your pictures and connect with your friends and family, has now got 2 billion people and does a hundred billion a year in advertising revenue. And I guess what I'm wondering is: why are people so willing to give up so much of their identity and their data to these things? Do they just not know?
 
Dr. Agnieszka Rychwalska: Well, I think unfortunately some of them do not know, but I think awareness of the problems with this kind of, let's say, social self-presentation is growing, thanks for example to such movies as The Social Dilemma. They're getting closer to, let's say, the average social media user. But still, I think most people are not aware of the consequences; they are actually trying to fulfil their psychological needs for social interaction. And at the beginning, I think there was a lot of hope for this kind of social media self-presentation: to have an opportunity to self-present, to manage your image, right? This kind of image management online could be, let's say, a little bit better than what you can present offline.

So, people kind of jumped on this occasion and used it to its fullest. But of course, then there was this moment when certain people realized that you can capitalize on this kind of surplus data. At first those data were used only for, let's say, optimization of the service, for making it better for the users, but at a certain point, I think it was Google who introduced the idea that you can capitalize on this additional data by selling advertising. And once this process started, it became very difficult to stop, because those people who have a lot of data simply know what the concerns of their users are and what they need, and they can shape those needs as well.

And they are also in control of the interfaces of the social media platforms themselves. So, they can kind of nudge people to share more data, right? They're abusing certain psychological needs, to self-present and to build your social identity through interaction with other people, simply to accumulate more and more data. And right now there's a huge monopoly in data control, and it will be very hard to break, because, as with other capital, if you have this kind of capital, you can actually multiply it. So the inequalities in data control are still growing.

Geoff: I think part of it also, Mike, comes back to that scrapbook comment you made a few minutes ago, where people are willing to share because it is convenient. If you think about 20 years ago, your children were in the school play and you took some pictures, and you had to attach them to an email and send them to grandma and auntie and uncle, and they replied, and it was just difficult to manage. Whereas now you just put those pictures up and they go out to everyone. So, that part is easy, and then of course there's the dopamine hit that comes back for all the likes, right? People are willing to share to get that hit of all the likes and thumbs-ups and all those kinds of things. So, part of the reason they're willing to share is just the ease, and they're either not aware of the consequences, or they feel the price is not too high for the convenience that's provided.

Dr. Agnieszka Rychwalska: Well, I think with sharing pictures of your children with grandma, this is different, because you're talking about the ease of sharing information, and this is one process. The communication has become a little bit easier, because you can engage in one-to-many communication instead of one-to-one, right? With email, you have to add every person's address and so forth. So, there is a certain new functionality, you can call it a new affordance, in those social media platforms that allows your communication to be faster and broader, in a way. But there's a second point, when you were talking about the likes. Of course, grandpa or grandma could have given you a like, or a thumbs up, for the picture, but this is different from a lot of the communication on social media, where you are posting stuff to build an image of yourself for different people, right?
 
I mean, you don't need to do that with your family, whom you probably meet in offline spaces anyway. So, on social media, what you can do is self-present to a lot of people and build your identity, let's say the public person who you are, especially for the people whom you meet very rarely, so that in offline spaces they don't even have a chance to verify your social identity. It could actually be totally false, which it usually isn't, but it's usually a little bit controlled, let's say. But there is this huge need: normally, we construct the self, who you really are, by interacting with different people and performing those different social roles in different social contexts, and you need the reactions of those people to, let's say, modify your identity, right?
 
For whom you want to be, you get assessments, judgments, and feedback from your social context. It either confirms who you want to be, or it's a kind of non-verification process, where they say, okay, you are trying to be very cool, but you are not really. And in this way you get this feedback, this kind of reflected appraisal of you appears, and you try to modify your behaviour accordingly, right, to get the proper response.
 
Mike : To get the cool thumbs up, or whatever.

Dr. Agnieszka Rychwalska: Yeah, it could be a cool thumbs up on Facebook. But, you know, on LinkedIn you could be assessed as a very professional and competent person as well, right? So, these are different identities that you might be using across different social networks.
 
Geoff: Yeah. That's a very good point. I was going to bring up LinkedIn, because I think on the personal sites like Facebook, particularly if you're interacting with your 150 or 200 friends, they know you both in real life and in your digital persona, so in theory one would mirror the other. But you're very right about LinkedIn. It's very different where you have 750 or a thousand professional contacts, and how you present yourself matters; many people would dare not comment on political issues or social issues...

Henry : Completely different.

Geoff: ...on LinkedIn, whereas on Facebook they might quite happily discuss that with their friends. So, the identity that you present on LinkedIn is interesting, and I often wonder about Microsoft; you can see they continue to struggle to get their LinkedIn users to create a more, shall we say, accurate representation of themselves, whereas most people, I think, are quite hesitant to do that.

Dr. Agnieszka Rychwalska: Well, you know, it depends. I mean, I think they can still get a lot of information and link it with the LinkedIn profiles by, you know, just placing cookies whenever people browse other sites, and so forth. So, there are ways for those platform operators to, let's say, join activity across different platforms. Maybe it's more difficult for Microsoft, but I think they also have their tools, and of course for Google or Facebook that's no problem at all. But it's important to note that trying to present yourself as a very competent professional is not something to be ashamed of. This is a normal process; when you are at work in physical space, you do the same thing.

Henry : Exactly.

Dr. Agnieszka Rychwalska: So, I mean, we are different people; each of us has a set of identities which we perform in different social contexts, and it is normal for those identities, or at least some of them, to be in conflict with each other. That is perfectly normal, and as you grow up, you learn how to separate your identities and how to tackle those conflicts. But those conflicts are very clear, for example, in the case of a teenager who is confronted at the same moment with their peers and their parents: the behaviour they would perform would be different for those different social contexts, and then this conflict of identities within the person is really very painful.
 
And it leads to some awkward situations. So, that's what you normally do in your offline interaction: you separate those things. You try to have time with your peers and time with your parents, and later on in your life, you have time with your colleagues from work at the workplace, you have time with your close friends, you have time with your family, and the networks that you are entering and quitting are related to specific social contexts. On social media, this is a little bit more difficult, and it's also not entirely up to you; it depends on the functionality of the platform and how well you can actually realize who your audience is at a particular time. There's the concept of, let's say, the imagined audience, which is who you think is watching you at a certain moment. To a degree this can be controlled, it can be perceived properly, but at times you are completely unaware of who is watching you.
 
Mike : Well, yeah, because you have no control over it, right? I mean, you really don't have any control over what you type into the interface on Facebook or any of these platforms, and where it goes. And something I want to ask, Agnieszka, based on what you just said. One of the things that I've always scratched my head about with these platforms, especially when it comes to this idea of centralizing the architecture: if you look at Facebook, or LinkedIn, or all of these different platforms, they are in effect one set of fields that creates a representation of your personality, or your person, or your identity, in the same format as everybody else on the platform.

Right? And theoretically that's to encourage you to, say, share information about who you are as a business person on LinkedIn versus as a social person on Facebook. But what effect does it have on people and society when now, instead of people thinking of themselves as individuals, I'm going to be a movie star someday, I'm going to write the next great novel, the way that you define yourself on these platforms is pigeonholed? I mean, I have to put my Facebook profile together the same way you put yours together, yet we're different people. What effect does that have on people, when we're all being forced into the same swim lane in order to represent ourselves online on these platforms?

Dr. Agnieszka Rychwalska: Well, first it divides people into two groups: those that have an account, and those who are off the grid and don't have any presence there, and I think there is probably a difference between those groups. But also, of course, the way people, especially the younger generation, think about self-presentation is structured by their early experience in social interactions, and right now the first generation that had Facebook from their teenage years is growing up and entering adulthood. It is very interesting to notice that those people have already been, let's say, brought up by their older peers, who came to social media later in their lives, and they are probably more flexible in their self-presentation and so on, but they have certain fears and certain behaviours that are simply an effect of the fact that they were using certain platforms, with certain affordances, certain functionalities, throughout their identity-formative teenage years. And I think there's one very interesting concept that is popular right now, these so-called fake accounts. Right now, I think, Instagram is promoting the usage of multiple accounts.

Mike : Yeah, they actually suggest that you create a bunch of multiple accounts if you want to have private conversations. You're like, what?
 
Dr. Agnieszka Rychwalska: Yeah. I mean, this should be normal, but remember, Facebook had this kind of policy, to keep your integrity or whatever it was called, that you should have a single account under your real name. Right now Instagram, which is of course part of Facebook, or Meta now, is promoting the use of multiple accounts, and I think the reason is very simple. They realized that when people have one single account, they restrain themselves in posting content, because the younger generations have already realized that what you post there probably stays forever, since anybody can copy it and save it for later, even if you later delete it, and that it is viewed by a lot of people.
 
So, their imagined audience right now is probably the whole world for this public sphere, this public persona, and this move by Instagram is actually a response to users already doing that without being prompted to. The younger users started creating these so-called fake accounts, the Finsta, in comparison to the Rinsta, which is your real account. What I want to point out is not the fact that people are doing this, because people were doing it offline for centuries, and online before social media it was also very normal to have multiple accounts, multiple identities. Young people have not changed in the fact that they need those multiple identities; they need them just like the older generations did. But they now call them fake, which is, for me, very diagnostic.

Mike : Even though it is a real identity.

Dr. Agnieszka Rychwalska: Exactly.

Mike : Because they're posting through it, and they're calling it the fake one.

Dr. Agnieszka Rychwalska: Yeah. And this is the identity where, in practice, they can be more true to themselves, because they interact with a closer social circle. So, they're displaying this kind of more, let's say, real self.

Mike : Geez. Yeah. Which one's more real?
 
Dr. Agnieszka Rychwalska: Yeah. I mean, both are real, this is true. But of course, the public persona is always more stripped of certain behaviours, because you have to make it suitable for many audiences. So, let's say it's real, this is you as well, but this is the you who is self-restrained, in comparison to these fake accounts where you can express more of yourself. So, this is an example of how the structuring of the way you self-present on social media actually affects people's thinking. For those people, this more unrestrained identity is the fake one; that's its name. This is an interesting aspect, and I think we could probably trace a lot of these, let's say, artifacts that are left in people's thinking because they were simply brought up on social media, and we will probably be finding them in the coming years just by doing research on this.

Henry : Agnieszka, I have to ask, because you seem like someone who could have a pretty good idea: why do you think people seem so willing to share so much of their identity and so much of themselves on social media platforms? Why is that?

Dr. Agnieszka Rychwalska: First of all, to construct your social identity, so to be able to tell even yourself who you are, you need to interact with people, and social media functionalities are in one way a great place for this; that was the first intention behind them. But after a certain time, when the platform operators realized that they can capitalize, to the degree that we see right now, on the data that people share, they want people to share more, right? And there are a lot of ways in which they can prompt this. I mean, there is a natural psychological need to build your identity through performing a certain role, a certain part of yourself, online, and you do that in physical space as well.

But of course, in physical space you have, let's say, contextual cues that stop you from doing certain stuff, cues that you don't have online, and maybe it was an accidental, let's say a frozen accident, that at first those platforms were simply built that way. But once the platform operators, the companies behind them, noticed that this works, that this gives them more data, there was no incentive for them to make social interaction online more similar to the offline kind, where you actually apply some self-restraint, and they have the tools to do the opposite. They can shape the functionalities of the platform in such a way as to make you share more. An example of that, of course, is this promotion of multiple accounts, right? The users might feel safer, because they think their posts are only viewed by their closest relations, and they feel secure in sharing their information, but of course the companies still have the data, and they can also link it between those different accounts.

Henry : Right. Of course.

Dr. Agnieszka Rychwalska: And this is the most important part: the linkages between the data, which enable better profiling of users. So, that's one example. They can also manipulate the privacy options in such a way that you again think you are safe, because you control the visibility of your content, and so you engage in heated discussions about some political issues, and it's perfect because nobody sees them. Well, maybe at certain points you achieve a certain, let's say, social privacy, that is, control over which of your data reaches your social context.

You do have some of that, because privacy features to a degree allow it. But you do not protect your institutional privacy, which is the privacy of your data from institutional actors such as private companies or governments, because when you share, they have it anyway. And there's also one less direct way in which the companies can prompt more data sharing: by steering the narratives about their own products in the public debate. An example of that is, I think, a very nice analysis in Wired recently, where the whole idea was that the problem with Facebook is not that it censors, or does not censor enough of, the content that appears on Facebook; it's Facebook itself, the whole idea behind it.

Mike : Exactly.
 
Dr. Agnieszka Rychwalska: That is the problem, and so is the way they are constructing the public debate, trying to push the conversation about Facebook towards very singular problems or issues, like: should they allow Holocaust denial on their platform? This is a very particular issue, and they are very happy to discuss it and to confront the regulators or the public about it, to steer the debate and so forth, just to move it away from the actual big problem, which is: we are using your data to make a lot of money and we don't care about the side effects. So, this is another way in which having a lot of data, and all the capital that comes from that data, lets them impact the public debate about those issues.

Henry : Thank you.
 
Mike : One of the other things that I struggle with a lot about how these different platforms have evolved: I'm going to guess that most people, when they go onto Instagram or Facebook for the very first time and join some kind of social solution, it's either because they've been invited by a friend, oh, I've posted all my pictures on Instagram, join me here and you can see them, or because they've created an account for themselves in order to interact with a group. Okay, my kid's soccer team posts all of their stuff on Facebook.

So, I have to be on Facebook to check that out. And so I see the intent in going onto Facebook or one of these social platforms at the outset as being fairly targeted and specific: I'm joining Instagram to look at the pictures from my buddy's bachelor party, or whatever. And then once you join, all of a sudden you have this open timeline full of people you have no idea about, who they are or where their data comes from, and that just gets dropped in your lap. So, you're in this megaphone of 2 billion people, and your intention is to relate with friends, but now all of a sudden strangers are in there starting to control your narrative.

Dr. Agnieszka Rychwalska: Well, there are two aspects to it. The first one is that people differ in terms of whether they even have goals going onto Facebook, because there was a very old division between digital natives and digital immigrants, which was later elaborated on, and there are different divisions right now. But one important aspect is that the newer generations often do not have a goal entering social media. It's not like they use social media to do something or to achieve something; social media is a place to be.
 
 Mike : Yeah. They have no choice. They're not going to have any friends if they're not there.
 
Dr. Agnieszka Rychwalska: Yeah, it's like, I mean, you don't have a goal hanging out with your friends. You don't even realize that there is a certain social function you are fulfilling; you just do it, because this is a very basic social need. So, this is the first distinction. The older generations usually have a singular aim, and this could be connecting with your particular circle of friends and being able to communicate with them, because, for example, they're geographically very far from you, and this is simply a tool to make that connection possible. But later on, of course, you find out that you can also meet your other friends there, and then you can engage in debate with strangers, and so on. So, this kind of repertoire of different goals that you are fulfilling grows. But the young people, I think, don't even think about it that way; they just go there because they have to be there. It's a public forum where you simply are.

Mike : It's like a peer pressure thing, as they used to call it when I was a kid.

Dr. Agnieszka Rychwalska: Yeah. But also, one thing that you mentioned: it can be your own action, so you go there yourself to actually instigate this kind of connection between a certain group of friends, but it can also be what we call an externality, which is: you are there because there is no other way to communicate with a certain group of friends. So, you are forced to use it by those externalities from your peers, and this can happen in very different situations.


It can be because your workplace uses certain communication software and you are forced to use it. You cannot just refuse, because you have to communicate. So, there are two aspects to that. The first is that there are generational differences in how you are present on social media. The second is that in both cases the externalities will always work, and they work to the advantage of those actors in this market who are already big, which is normal for network externalities: the more people are there, the more useful the service is, and that's why you are also brought in. And, like the second part of what you were saying, at a certain point you lose control over your audience. You are using it for a particular purpose, and then maybe for another purpose, but at a certain point you do not realize that you are entering a very public sphere where you interact with a lot of people and you are not even aware of who is reading your content.
 
Mike : Well, okay, so now you're touching on something I can ask you about. I don't know if you're going to have an answer to this, but I want to put it out there. A lot of what we're seeing now is extremes in the public debate. It doesn't seem like societies are built anymore on, you know, getting together as Canadians to roll up our sleeves and do what's best for the country. It's all: I'm out for what's good for me, you're out for what's good for you, and so on. And so I have some interesting conversations with people who, for example, are on both sides of the vaccination debate, and one of the things that I've always wondered about is this: I like to encourage people.

I say, okay, look, if you read something on a social feed, or wherever it was, and you form a certain opinion, make sure you do some research, just to look into the background of that information, to make sure it's not coming from some sensationalist place, that it's real and has been well researched. But because you get into this algorithmically derived tunnel vision, if you're somebody who, for example, says, okay, I'm going to start researching whether this email I received, or this post on Facebook that says the earth is flat, is actually scientifically valid, when you go out and start searching, because all these databases are connected, whether you're on Google or on Facebook, you're getting your information from the same funnel.

How can you even rely on the places you search providing you real, scientifically valid information? It would just seem to me that there's so much data being collected, refined to be so accurate about all of us, that if Facebook decides I'm a flat-earther, that's all I'm ever going to see. So, how do I get out? How does the average human even consider breaking out of what the algorithm has created for them as their identity? I'm stuck in this thing now because I'm a flat-earther, or I searched it once and now that's all I see, and so now my opinion becomes flat earth.

Dr. Agnieszka Rychwalska: Yeah, that's true. I mean, you change your opinion as an effect of what you're seeing. But if I had an answer, I think I would be out selling it, because I think we don't have one yet.

Geoff: So, the answer, in my opinion, and this is to a large degree a pipe dream because these platforms are incorporated in the United States, but to my mind it comes down to government regulation. Governments regulate drugs, governments regulate what's on broadcast television, governments regulate a lot of things, and one of the things governments could do, and perhaps a European government could take the lead here and Facebook and others would have to follow, is basically put some laws around what the algorithm is allowed to do. So, in the most extreme example, you could say: we require that the algorithm behave like Facebook did in 2009, where it is just a linear chronological feed, and it will just show you, if you're part of some Star Trek groups, posts from those groups, and you'll scroll and eventually it'll say, you've come to the end, come back another day.
 
But that is an extreme example. A less extreme example would be just saying that the algorithm is required to display alternate points of view, and in the United States this existed in broadcasting: there was a requirement under law, a doctrine, where broadcasters were required to present multiple points of view. It was taken away, which led to the rise of Rush Limbaugh and Fox News and others. To expect better of these corporations run by egomaniacal, quasi-libertarian tech bros is, I think, just impossible. But we elect governments to look after us, to look after our children, to look after society, and to my mind this is where the solution comes from, and I don't mean breakups. You could break WhatsApp away from Facebook, you could order that Instagram be separate, and so on, but even if Instagram wasn't part of Facebook, it would still be harmful to teenage girls. If you were to basically put it under law that you can't click like anymore, or this type of thing, then to my mind that's how you attack it: through regulation. The libertarians in the crowd might get very angry with this, but it's my opinion that's where you come at it.
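To illustrate the gap between the two designs Geoff contrasts, here is a minimal sketch; the fields and the scoring are invented for the example, not any platform's real ranking:

```python
# Chronological feed versus engagement-ranked feed, a toy illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # the platform's guess at reactions

def chronological_feed(posts: list[Post]) -> list[Post]:
    """The 2009-style feed: newest first, nothing else considered."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    """The modern style: whatever the model predicts you will react to."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("friend", timestamp=1000, predicted_engagement=0.1),
    Post("stranger (outrage bait)", timestamp=500, predicted_engagement=0.9),
    Post("family", timestamp=900, predicted_engagement=0.2),
]
print([p.author for p in chronological_feed(posts)])  # friend, family, stranger
print([p.author for p in engagement_feed(posts)])     # stranger first
```

A regulation of the kind he describes would, in effect, constrain which sort key the platform is allowed to use.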
 
Dr. Agnieszka Rychwalska: It is very interesting. I was brought up in Europe, so I'm not so against regulation, and I think we do need regulation, but we have to take into account that regulation in general develops more slowly than technology, and it will be very difficult, because we cannot hope that the big companies will, by themselves, cut off their main financial source, the data accumulation. On the one hand, strong regulation could force them to change, but they will try to overcome it in certain ways, and there's an example of that in the regulations for cookies: in Europe you have to click, on every website you visit, to accept or not accept cookies, and still the default option is to accept them.

So, it takes a lot of effort to actually decline them. But it's very interesting how even browsers are trying to limit cookie placement, and the companies are trying to overcome that, so it's constantly an arms race. The regulations are trying to stop certain processes, but there's backlash from the companies, who try to bypass them in some way. So, that's one way to think about the problem. But coming from the perspective of complex systems, I'm very interested in self-organization. There was this nice article recently saying that Wikipedia, for example, was the last good place on the internet, and I've been doing a lot of studies on Wikipedia, mostly related to, let's say, the quality of the content people post, or how they coordinate their actions to produce good quality articles.

But what fascinates me is that there is a very strong social process of self-organization going on, where there's really no fake news on Wikipedia, and even if it appears there for a second, it gets deleted. When I ask people why there is no fake news on Wikipedia, they usually say, well, it's Wikipedia, right? It's an encyclopedia; you cannot post fake stuff. But the fact that it is an encyclopedia is simply a decision of a group of people. It could be anything; the platform behind the system is simply a collaborative document-editing platform. People just decided this is a place where we don't post fake stuff, and of course, once it gained popularity, there were a lot of people who were, just for fun, trying to destroy those ideas and so forth.

And still, it is a huge problem to filter stuff out, but throughout the years they have developed policies, rules, procedures, and practices, and also designed special roles within their community to tackle those problems, and this is an example of social self-organization, which answers a certain need. It is not perfect, it's far from perfect, but it is one positive case where you can actually gather content that follows certain rules, and it's just people who are doing it. So, it's the power of the community to filter content, to organize it, and so forth. And I think there was a big issue when they deleted Fox News from their reliable sources list; there was a certain turmoil, but, you know, it survived.

I mean, I don't think Fox viewers stopped using Wikipedia on their own; I think they still are, for sure. So, there's a kind of top-down and a bottom-up approach to regulating content on the internet in general, on different platforms, and I think both could be viable, and probably a combination of both would be very interesting. On the other hand, I also do hope, and this is just a hope, that we will have some kind of disruptive technology that will take over, because if you think about it, there has been very little innovation in terms of social media. Facebook has evolved, but it's just a platform for posting your stuff and connecting with people. They've added videos, some kind of events, timelines, stories, different stuff happens, but in principle it's just copying stuff or expanding on stuff that has already been there.

Mike : Absolutely.
 
Dr. Agnieszka Rychwalska: We haven't had a huge innovation in a long time, really those 20, maybe 15, years since the appearance of social media: something that would just take over a lot of human attention and give us, let's say, an alternative. And I'm still hoping that maybe somewhere something is growing, brewing, somebody thinking of totally different ways of social interaction on the net, and that maybe there is a chance in that. Of course, the only way for it to succeed would be to be totally open, because otherwise it could be bought by the big players, right? But if it's open, it cannot be bought, and it can evolve in parallel and take over, in a way.
 
And if you think about it, that was the main idea behind the greatest innovations, the internet itself and the World Wide Web: the idea that this is just a tool to build upon.
 
 Mike : That's right.
 
Dr. Agnieszka Rychwalska: And we need that kind of technology. I mean, Facebook, and most social media, were closed and proprietary from the beginning. As I said, this is just my hope, I'm not counting on it too much, but the technology is evolving and people are thinking, and I think there's a lot of creative potential, and maybe in a couple of years we will not be having any of these troubles, because a disruptive technology will simply have come along.
 
Mike : Well, if I was to make the call, Agnieszka, it definitely seems like it would be a lot easier, and probably a lot more reliable, if there was a new technology that would allow individuals to define their identities and how they share them with other individuals in the digital context online, than trying to figure out how you regulate all of these disparate providers in different countries. This is part of the challenge: we've been doing a lot of work trying to figure out digital identity, that's one of the projects we're working on together, and this idea of how you even define digital identity, and how you then securely share it, not only has technological implications, but regulatory, language, political, all of these different things come into play, and you start to realize how complicated it is to have a third-party definition of identity. Whereas if instead we could all create our own first-person identity and share that identity in different contexts with different people, that seems the easier way to get into some kind of balance with technology.

Dr. Agnieszka Rychwalska: I fully agree. I think, you know, trying to come to a single definition of identity, what it should include and what it should not, and especially how it should be linked across different identities, different social contexts, and so forth, is very difficult. What we need is a more basic technology behind sharing identities, kind of like a protocol for communication, for social interaction, that would be very generic but would allow you as a user to create your ID, or identities, and communicate with other identities, that is, with other individuals or institutions. This kind of really low-level technology would probably also allow a lot of applications, open, public, and private, to develop new products and new services, and that could be some kind of disruptive technology. But of course, we have to remember that the big players will be trying to prevent any such disruption of their processes, because they already have a lot of capital, a lot of resources, to maintain the status quo.
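As one sketch of what such a generic, low-level protocol might look like, here is a toy construction in Python using the PyNaCl signing library: one keypair per relationship, so each connection is unique and contexts are not linkable by default. This is our illustration only, not any product's or standard's actual protocol:

```python
# A first-person identity sketch: one keypair per relationship.
# Assumes the PyNaCl library (pip install pynacl).
from nacl.signing import SigningKey

class FirstPersonIdentity:
    def __init__(self):
        self._keys = {}  # one signing key per peer or context

    def key_for(self, peer: str) -> SigningKey:
        # A fresh keypair per relationship: "you" as seen by this peer.
        if peer not in self._keys:
            self._keys[peer] = SigningKey.generate()
        return self._keys[peer]

    def introduce_to(self, peer: str) -> bytes:
        # Share only the public half; peers cannot cross-link contexts,
        # because each one sees a different public key.
        return bytes(self.key_for(peer).verify_key)

    def sign_for(self, peer: str, message: bytes) -> bytes:
        # Sign a message under the identity used for this relationship.
        return self.key_for(peer).sign(message)

me = FirstPersonIdentity()
print(me.introduce_to("grandma").hex())   # your identity in one context
print(me.introduce_to("employer").hex())  # a different, unlinked identity
```

The user, not a platform, holds all the keys, which is the first-person identity idea Mike raises above.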

Henry : Exactly, and as you may realize, that's one of the reasons we have built Manyone, and we're perfecting it now, because indeed it gives you the ability to have a connection with someone, but each connection is unique and under your control. So, that's a huge, fundamental difference from what we're used to in the market today. Agnieszka, I want to thank you so much for taking the time to talk with us today. It was fascinating, because we're learning things from a completely different perspective, and honestly, a lot of the things that you spoke about today I had never really thought of in that fashion. So, thank you so much for joining us.

Dr. Agnieszka Rychwalska: Thank you again for having me. It was a great discussion for me as well.
 
 Geoff: Yes. Thank you. Fascinating.

Mike : Yeah, Agnieszka, one of these days, you know, I think we're going to have to do another one, because on this topic I've still got 15 questions to ask.
 
 Henry : I know.

Mike : You know, so maybe we'll delve a little bit deeper into this; there's another conversation to have in the future. But thank you very much for taking the time to join us today.
 
 Dr. Agnieszka Rychwalska: Okay. Thanks a lot.