Emma Beauxis: Beauty Bias: How AI Reinforces Societal Stereotypes

Episode 8
44:12

This podcast episode explores the concept of "aesthetic capital" and how it shapes our lives in the age of social media. The host, Doortje Smithuijsen, interviews Dr. Emma Beauxis-Aussalet, an expert in ethical computing, to discuss how AI systems can perpetuate biases based on appearance. They delve into the example of AI-generated avatars that tend to reinforce stereotypical beauty standards. Dr. Beauxis-Aussalet also explains how our behavior can influence AI by shaping the data sets used to train these systems. The conversation concludes with a thought-provoking question: does AI truly understand beauty?

Episode Transcript:

My name is Doortje Smithuijsen, and in this podcast I will be investigating aesthetic capital. Because next to the more well-known forms of capital, like social capital and cultural capital, there is also something called aesthetic capital, and it captures the way we look and the way these looks define our class and our chances in society. Due to the rise of social media and the increased visibility of our physical appearance, this aesthetic capital is becoming increasingly decisive for our social opportunities. So I would really like to investigate this form of capital with all kinds of researchers, some from the VU, some from different areas of society, and together in this podcast we will investigate: what is this aesthetic capital, and how does it shape our society today?

Welcome to this next episode of The Connected World, the podcast in which we talk about the world of aesthetic capital. This episode I'm joined by Dr. Emma Beauxis-Aussalet. I'm hoping I pronounced your name correctly?

Yeah, almost, almost perfect.

Okay, we'll work on that throughout the episode, I hope. She is an assistant professor of ethical computing at the VU here in Amsterdam, and with her multidisciplinary background in computer science
and design, she has been researching the systems that underlie transparent and controllable AI systems, which is of course very relevant today. For her achievements in this field, she was named one of the 100 Brilliant Women in AI Ethics, and that was already in 2021. Welcome, Emma, thank you so much for being here.

Thank you, and thank you for the kind introduction.

You're welcome. I was wondering, I've been walking around with this question and I was hoping I could finally ask it to you. There was this hype going on, I think about half a year ago, when people seemed to collectively realize what AI can do when it comes to avatars. There was this, I think maybe it was OpenAI or another system, where you could upload some ten profile pictures of yourself and it would give you a few avatars of yourself made by AI. Do you recall this?

Not this particular application.

Okay. So it was this sort of Instagram-hype kind of thing. I saw a lot of images from friends, and what really struck me was that the images that came out of this AI wizard thing were super sexist. The girls all of a sudden had super big breasts and looked like Barbie, and the guys were like Vikings, super muscular types of men. Everyone seemed to be really pushed into their stereotype. Is this something that surprises you?

Not really.
I mean, it's not that AI is necessarily like this; it's not bound to produce these kinds of clichés. But a lot of these systems are trained on what is popular, or what is liked and shared the most. I have no idea how they collected the ground truth data set for this particular application, but it could be an effect of what we see on online platforms and social media, or other collections of images: maybe there is a bigger mass of people who get excited about this kind of aesthetics, and stereotypes are formed this way through the AI.

Because what is considered popular might exaggerate certain of those traits?

Yeah, exactly. Because maybe these kinds of images are shared the most and clicked on the most, this is what the AI will come up with when you ask it to make an avatar. Or, to be more precise, it's what the humans who make the AI choose to come up with, which data sets and criteria they use: let's say, which images are the most clicked, or the most shared, or the most present in some existing media or collection. So initially I wouldn't say it's necessarily the AI that has to be like this; it's how we train it.

It's like society forming the AI underneath it. You also did a lot of research on the way discrimination can find its way into AI. Can you tell me a little bit about this?

Yes, there are many things to say about it. So, what are we talking about with discrimination? I could of course give the example of the child care benefits scandal, where an AI system was used by the tax authority to identify claims for social benefits that were potentially fraudulent. The system was used by people who didn't necessarily understand what was flagged or why, and it was flagging people with two nationalities more often than others, so that can be a racist bias.

And where did the bias find its way?
Into the system? First, of course, via the training data, the ground truth data. In this particular case, they used data from other frauds that were committed from abroad, I don't remember exactly which country, but some fraudsters would make applications in the Netherlands while not living there, and that was a totally different use case than child care benefits. That could explain why people with two nationalities were flagged more often. So that's a first place in which the bias can enter.

In policing use cases, there is also a self-fulfilling prophecy, or feedback loop, where humans have biases: they might tend to be more harsh or more lenient on certain social groups depending on their own personal biases, sometimes very unconsciously. So certain groups are policed more, punished more, sent to the police office more, and then in return they are more present in the database as more risky people. That's what we've been seeing in different policing systems, of which fraud detection is one.

And I feel like this is sort of the
most well-known effect when it comes to AI; especially here in the Netherlands, with the toeslagenaffaire, this is something that comes to mind immediately. And I'm also thinking, when it comes to the aesthetic component of AI: you already mentioned that it's probably not that the AI systems we use are sexist or racist in themselves, because they are systems, they don't have a bias of their own; the bias comes from the people who do the programming. But then I'm wondering, when you look at this case of the avatars that came out quite Barbie- and Ken-like, let's put it like that, could you maybe also assume this kind of implicit discrimination, the vicious circle you just explained, where people who are seen as more risky start to behave more riskily? Could this also be the case when it comes to our looks? For example, people who look good, in quotation marks, are maybe more likely to show themselves online, and so create more of a standard than people who don't look good, in quotation marks again, and this creates its own self-fulfilling prophecy.

Yeah, I would agree with that observation. Even without an AI, people who are more happy with their aesthetics might publish more images of themselves online than people who are less happy with their aesthetics. Or people who receive more likes with certain kinds of pictures would publish more and more of those kinds of pictures, because they have more success. So it does start with human behavior indeed. And it's not only the behavior of the humans who choose the data set; it's also the behavior of the people who are in the data set or make the data set. And then there's also a phenomenon with all these beauty filters that amplifies that quite a lot.
Now there are lots of tools that are accessible for making avatars, but also these beauty filters that make your face more beautiful according to certain standards. That adds to this phenomenon of converging towards the same kind of stereotypes or aesthetics, because not only would the people who naturally have this aesthetic post more pictures, they might also add the beauty filters, and people who have less of this aesthetic would keep tending towards the specific aesthetic, including with the beauty filters.

And, maybe this is a little bit of a philosophical question, but does AI know beauty?

That is a philosophical question, and I think the underlying philosophical question is: does AI know beauty, or anything else? This is quite a hot topic. A lot of utopists, or futurists, have hopes that in the future AI can know, in the sense of knowing with higher analytical or cognitive abilities, imagination, reformulating problems, all these things that we do quite naturally as human beings, or that even animals do. This is not quite the case with AI. We can't say it has a consciousness; we can't say it has an understanding. It can look like it has, and the essence of its work is mimicking human intelligence, or natural intelligence, so it can be very good at mimicking it, but it remains a mimic. There's no underlying knowledge. When you take the chatbots and ask them a super complicated question, they can come up with an elaborate answer, but do they really understand what they're saying? They don't, really. For example, and that's an example from Gary Marcus: you can tell the chatbot that you went to a meeting in your swimsuit, and the chatbot can continue the story without necessarily noticing something is odd, because meetings are associated with suits, and swimming as well. The AI doesn't know that wearing a swimsuit at a meeting is a very peculiar form of aesthetics.

Well, that depends on your line of work, of course.

Yes, this is very true.

Would you call AI democratic?

Some AI systems may be made in a democratic way; it depends how you define what would manifest democracy. If people can vote? There's no election for what an AI should do.

Or for who gets to program the AI system.

Exactly. There are a lot of implicit votes and feedbacks that could be given, whether you like an image or not, but in the user interface what humans have the ability to do is very limited. Like the AI on social media, for recommender systems: it looks at how often you watch content from certain publishers or on certain topics, whether you watch them for a long time or a short time, whether you like them, whether you share them. But it's very, very limited, because you don't control in the first place what the set of elements is that you will start seeing.

And of course, when it comes to, for example, quotas for women, you have this sentence that is always repeated: you can't be what you can't see. You can't come up with the idea, as a woman, of becoming prime minister when you have never seen a female prime minister. And I'm thinking a lot about the role that AI can play within this idea.
You know, this idea that you can't be what you can't see. When it comes to, for example, social media, I feel like, well, maybe this is my own experience, but like 90% of what we see in a day is social media, whether it is Instagram or WhatsApp or whatever. You are spending a lot of time on your phone, taking in everything that is being shown there. And I'm wondering, couldn't AI also play a kind of quota role in this? For example, showing more minorities, or more disabled people. I'm just thinking out loud here, but to normalize the way we look at minorities: wouldn't it be nice if AI would not always show us the norm, but the minorities, making them more normal in our visual sphere?

Yeah, that's a great question, and I really like that line of thought. So it's a matter of having representativity of different groups, of different ways of being. And if the question, in a technical sense, is whether AI can put that into practice: technically, yes, we can design AI systems that have more diversity in the content they provide. In the news industry, they also try diversification in their recommender systems; diversification is actually a technical term in the fields of recommender systems, newspapers, and search engines, partly because of this problem of bias and self-fulfilling prophecies. Eventually it gives a better user experience: when you search for something, the system doesn't necessarily get it wrong if it gives you several possible answers.

Exactly.

Yeah, and if it gives you just one, then it might get it wrong and you might get very frustrated. But then it also plays a social role for
representativity. For example, a couple of years ago, if you looked for images of a CEO on Google Search, you would see mostly white men in suits. And they've worked on that.

Oh really?

Yeah. I don't know exactly how they do it, but it seems to me that it relies on human editorial activities: tweaking the algorithms, including diversification mechanisms, technical solutions for this. But it might require humans to identify the topics where there are issues.

And of course, then you come into this whole new realm of discussion about what the important topics are. Because, for example, CEO: when you tell me this now, I feel like, yes, of course we should also see female CEOs, people of color being CEOs. But this goes on through all layers of society; not everyone is looking for a CEO. You could also be looking for a plumber, and that could also be a woman, or a truck driver, you name it.

So it's not sustainable to have human editorial activities on all the different topics, and there can be mechanisms that are more generic, but they are not fail-proof. And eventually the system can get it wrong as well. For example, there have been people trying image generators and asking for images from American history, the era of the first Thanksgiving, for example, and asking for images of Thanksgiving that are more diverse. The system ended up generating images of people of color in the outfits of the white people of that time, with the social status of the people who colonized America, and this was terrible. It's really complicated to understand the underlying relationships between social groups and the realm of the possible. And eventually we also need to give examples of diversified situations to the AI. Take the CEO example: if indeed, in all the images that we could gather
online, 95% are white men in suits, with very few women and very few people of color, then you need to collect more images, and that is a human activity too, this curation of the data set. And we might tend to then collect more images of women, and eventually we end up in a binary, women and men; or the women might be stereotyped themselves in the way they are represented, or represent themselves. So we might go from one core stereotype and diversify with one more profile, and one more other profile.

But maybe this is the problem of AI, sort of summed up like that: it's always looking for stereotypes.

Yeah, that's something that both AI and humans do.

I was just going to add: just like our brains, actually.

Exactly, except that with AI there are more chances that the details, the more subtle differences in aesthetics or other human traits, get erased somehow. Because with everything that is a minority, or occurs only a few times, AI systems tend to generalize, and they might disregard some variations as outliers, or chance, or an error. So it does amplify the process of forming stereotypes, because it generalizes. But that's a phenomenon that we also exhibit as humans.

Exactly, except that we have an understanding of things, we know what we are doing, we have a moral conscience. So okay, maybe I don't see people in a wheelchair every day, but I do know that you have to take them into account. And this is the difference, I think, though you know this better than me of course, between a human consciousness and an AI consciousness. Is this the case? Could you teach an AI to think like this?

There are some reasonings that we can. We first have to generalize our human reasoning before we formalize it into a machine, so we can't really
escape the process of generalizing and losing subtlety. There are people who work on making machines more moral, and they either encode principles for how to avoid immoral outputs of the AI system, after the fact, with rules, for example: if I don't have half men and half women, then I don't take this output. And by the way, we're still not talking about non-binary people. There are other ways people try to teach AI to be moral, by giving it a data set of moral reasonings: if this and that happens, then the correct solution should be that, according to this moral principle, in this culture or this way of thinking.

Right. I mean, you have to come up with code that represents this idea of morality, but then you get into this discussion of, well, what is morality really? And this is something, if you look throughout the world, there are so many different...

Absolutely, absolutely. Some philosophers would even say it's a lost cause, because as soon as you encode something, it has a limited way of functioning, always in the same way, while morality and complex questions are full of dilemmas, and you'll miss a future situation that you hadn't thought of in the first place.

And I guess with morality you also really come up against this idea of a sort of human intuition about what is right. And while I'm saying this, now we're getting into a whole new area, where you think: yeah, but this is right according to a human way of looking at things; maybe AI would say, well, this is moral in our world, or something. At some point, maybe the AI systems will say, well, I guess that is human morality, but for us it's different.

For us? Like, you mean the AI would start saying "for us"?

Yeah, as if it has the consciousness of its own class in society.

Well, we're not there yet, I hope. It could say, I don't know, this situation
is not moral because it's against some other things that it has in its database.

Yeah. I mean, I think when normal people like me think of AI, in a sort of unconscious way we always think of the voice of Scarlett Johansson in Her. You probably also saw the news that OpenAI had to delete their Sky voice because it sounded too much like Johansson. But then, you know, there's this scene where the protagonist of Her, the movie, asks her, with how many people are you talking right now? And she says, yeah, I don't know, like 60 million or something. And this is quite a nice moment, I feel, because it shows the possibility of AI. I guess maybe AI could come up with more of a democratic view of morality, in the sense that it could say, well, 80% of the people in the entire world feel like this is the moral thing to do, as opposed to the way we mortal people tend to come up with moral decisions every day, where we're like, well, what feels good to me? Could that be a scenario?

Well, that could be a scenario indeed, a democratic way to solve it, where the majority says the moral thing to do is this, and then that's what the machine would answer. But then, how did you sample the opinions of people? That's one thing. And the second thing is, what about the situation of the remaining minority, where it would still be more moral in certain cultural contexts to do differently?

Well, exactly. And does everyone have the same access to influence this AI, to make their vote, or whatever?

Exactly. And then you also have the info war, where activists take more space in the social media. There could be ways not to...
Yeah, so extreme right and extreme left would have a much larger voice than other people. And then morality, I guess for me it's a bit like science: it keeps evolving. There was the death penalty in France, my home country, and then we considered it immoral and removed it, in several steps, and there is still the death penalty in a lot of other countries. Historically, through time, we were doing duels also, you know.

Yeah, maybe AI will come up with: this is quite a moral way to settle things, go back to duels.

Yeah, exactly. And if you had money, you could send a professional duelist instead of you.

Well, let's hope it doesn't come to that, but who knows. I was just wondering also: when you read and follow the media right now, it seems like we will all be replaced by AI systems within ten years from now. And the funny thing with AI is that when they replace us, they replace us as humans, right? So they come up with voices, or even articles; when these are written by AI, they still get a signature as being by a human writer, for example. I guess this is the way that we still feel sort of comfortable with these changes. But then, when you look at AI and you see there's a lot of bias going on within these systems, are you worried that with this sort of AI takeover we will come into a world that is only made up of, well, let's say, very norm figures?

It's a great line of thought. So yeah, the idea that they will replace us, but for what kinds of things? Now it's already replacing a lot of factory jobs, like sorting fruits and vegetables to kick out the rotten ones, and that's a fine replacement. But what if they replace us in activities that are dear to us, or more important for our mental wellbeing or our societal fabric? For example, some say: I want AI to do my laundry and my dishes so that I can make my art and do my creative thinking; I don't want AI to do my art
and my creative thinking so that I can do my laundry.

Exactly.

So these are very particular topics, the voice and aesthetics, because this is how we express ourselves very intimately as humans. And what impact would it have on our psychology and our society if we get used to these elements of aesthetics and self-expression being generated by machines? Would we start to mimic the machines ourselves? Would we be missing real human connection, a more emotional vocabulary, more freedom of expression? The voice in particular I find deeply connected to humanity; it's how we express our language. Of course we have visual language, non-verbal language in our gestures and the clothes we wear, the way we do our hair, and so on, but the verbal language is really, really important for communicating between humans and for developing ourselves, personally and towards others. And I wonder really deeply whether interacting too much with language and voices that are artificial will make us lose a kind of communication sense; that we would not understand the emotions of others really well if we suddenly hear a real person talking when we're more used to hearing machines. We might not know how to express our own personal emotions through all the subtlety of voices if we hear the voice of Scarlett Johansson, or other specific voices, in many different contexts that don't transmit the real personality of the person behind the voice. And that's the kind of argument that, in France, we are starting to hear from the professionals who dub movies: they're afraid that their jobs are going to be replaced by AI, and they insist that they have a job and a role in society, which is to communicate human emotions, and perhaps deeper knowledge and deeper understanding, beyond just the surface of the words.

And how do you look at this? How do you feel about this?
Well, personally, I feel a lot of discomfort that the personal voice of somebody would be used for words that that particular person did not pronounce. I'm particularly concerned when this voice is used without their consent. In my paranoia, I could imagine, why not have advertisements that suddenly sound like your family and friends, using the same kinds of voices? It's also a very dystopian idea.

It's funny, because for example in the Netherlands we had quite an interesting case on this subject with Aldi. It's a supermarket, and they always used the voice of quite a famous Dutch actor, which was recently replaced with an AI voice. And I think it's so funny: whenever I hear this voice on the radio, I really do feel like I'm listening to a voice actress. It's a female voice, and it's a voice like your loving aunt who hasn't seen you in a few months and is super happy to see you; that's the voice. And I feel like if they hadn't told me that it's AI, I would have thought it was a very good actress.

I see. Yeah, I'm not sure exactly what the experience of humans will be with those voices. This is something that is rather uncharted; I don't know how much research we can find, and we will probably have to rely on our own personal experience while this is being put into practice. So I'm quite interested in your personal experience hearing that voice: at which point did you realize it was an AI?

Immediately, immediately. I knew, because I knew that the actor had lost his job; it was a little bit of a news item here in the Netherlands. But, I mean, while I have the same reservations as you, actually, even knowing that this is an AI voice, it still feels human to me. It's very interesting. Do you recognize this feeling also?
I have kind of sheltered myself from it; I'm too far from experimenting with it. I'm just repeating the thoughts that I hear from these professionals who do the voices, and, yeah, that's my personal concern. There's one thing I remember seeing from research about learning languages and voice: there are some sounds that are very different depending on the language. Chinese has different sounds than French, of course, and as children we can hear these sound differences up until a certain age, and then our brain starts to generalize as well, and we stop being able to hear the subtle differences. And there is a grace period of two years, between four and six, I think, where you can relearn the ability to recognize the subtleties in the sounds. But this relearning process works only if it's a human that speaks, not a pre-recorded voice.

Oh really? They didn't try it with an AI system? I was just going to ask. So it's also really the human touch that plays a role in that. But then again, people do learn languages via Duolingo, et cetera.
Yeah, I don't know how well that works. But it's just an example; I'm pretty sure there would be implications in terms of perception and emotional connection. But that might be an interplay between the voice and the text that is generated, and maybe there won't be any: maybe for the supermarket it's okay, and maybe in movies it's not so much okay.

Well, yeah, it also comes down to what you expect from media or outlets or whatever. Because, for example, these professionals who dub voices say that to make a proper voice in a movie, they have to understand the whole body: if a person is limping, they would talk differently. But then again, not to disrespect these actors, but I would say that if I were them and my job was on the line, maybe I would be able to learn how to talk so much better, really feeling what it's like to have a limp. You never know.

To conclude a little bit: when it comes to the way we look and the way we represent ourselves, and the role that AI plays within this process, are you sometimes worried about beauty standards becoming very high, or about minorities being pushed away more?

I have many reasons to be worried with AI systems, though I try to also see the good things that could happen through them. But I'm more worried about the humans who make the AI, use the AI, and interpret the AI, than about what AI is necessarily going to do.

Why is that?

Because I believe that it's still possible to design
good AI systems, AI systems that are capable of evolving, with people there to keep improving them. But in a lot of the social phenomena we see on social media, the AI itself might not be the culprit; it's also the whole user interface that pushes for likes, the more likes, the more popularity, and the fact that very little control is given to users to take the lead in searching for different kinds of content. Diversity is not addressed only by fixing the system that pushes information onto people; it also needs to work the other way around, with more ability for people to look for different information on their own. If you make a search on Google, you can search for anything you want, but you don't have this kind of search engine so easily on all social media.

And even when it gives you some diversified results, it can only diversify so much.

Exactly. And you don't have strong means to tell the system, stop showing me things about football, or stop showing me things about Leonardo DiCaprio, because some of the advertisers pay to have their content still pushed at you. And there is also this law of majority and minority: without giving people more control over how they can search and filter the content they see, they might not be able to reach the exact corners they want to go to.

Do you feel like there's awareness among the people programming AI that they actually have this power to take the biases and at least try to change them?

Yes, I think the awareness is there. I couldn't tell you any statistics on the level of literacy in it, but I think a lot of the issue is again in the human organization: do you
want to decide to have a whole team in your company spend the effort to check for biases and address them, a team for maintenance activities, and a whole system for people to signal when something is potentially biased or harmful, with all the complexity that represents? Also to arbitrate between exaggerated and fraudulent claims that something is harmful when it's not, because that's used by activists online, who would flag somebody for abusive content when they don't like the person or their political orientation. So it is complicated; it requires manpower and investment, and the economic part of it is absolutely not negligible. And I think it makes the jobs and the lives of AI developers who are sensitive to this topic more difficult, because it's a socio-economic problem and the economic side is enormous. That's also why I think we are not given so much control over what is shown on social media; it's not just a matter of pure convenience.

No, exactly. Do you feel like there's a lot of lobbying going on to get this right?

I don't know; I'm not much of an activist myself. I mean, I wish I had several lives, and then I would do it.

I feel like this is such an important topic, but then when I envision the investors when it comes to tech, and the people in Silicon Valley, I feel like this might not quite be the most sexy thing that big investors want to put money into.

Yes, I think the ethical risks are taken seriously by investors, also because of the threat they would pose to their investments. But it's still a conversation that we need to have and to develop: is this really the kind of strategic investment we want to make, an AI with as few safeguards on bias as possible, or what should this minimum actually be? Do we want more, and what about the economic implications?
That's a conversation that we need to have. And then we also need to have another conversation about what to do when something bad happens, and something really bad might happen, with AI that is misused. It can be misused in an underground way; maybe there will be too many infringements of privacy, or the mimicking of somebody's voice, and other systems that make deepfakes of a really violent nature. It's not because a company will not do it that it won't happen.

No. And even if a company did, how would that be policed? You have AI systems that have been developed by some small team of a couple of developers, and that have been used by other people to harass each other, and sometimes by the police. It still happens, even if it's not a commercial company.

Definitely, yeah. Well, let's hope that we will eventually come into a world with more safety, and with fewer biases within the AI programming. Thank you so much, Emma, for being here for this wonderful talk on this subject.

Thank you. Thank you too for the nice conversation.

Meet your hosts:

Doortje Smithuijsen

Host



