
AN INDIVIDUAL'S ABILITY TO CORRECTLY IDENTIFY A DEEPFAKE

Joy Esperanza Deira

Dissertation proposal report presented as partial

requirement for obtaining the Master’s degree in

Information Management

NOVA Information Management School

Instituto Superior de Estatística e Gestão de Informação, Universidade Nova de Lisboa

AN INDIVIDUAL'S ABILITY TO CORRECTLY IDENTIFY A DEEPFAKE

by

Joy Esperanza Deira

M20200105

Dissertation presented as partial requirement for obtaining the Master’s degree in Information Management, with a specialization in Marketing Intelligence

Advisor: Diana Orghian

October 2022

ABSTRACT

With more than half of the planet on social media, it has become a prime source of information. Unfortunately, this has also led to fake news spreading at an alarming speed, as shown during the 2016 U.S. elections. There is a new form of disinformation called deepfakes: videos altered through artificial intelligence, which threaten to make it impossible to discern fiction from reality. The consequences are severe, ranging from false memories to polarization and defamation. Current research focuses mostly on the dangers of deepfakes, or on how to combat them with digital technologies. However, there is very little research on how well the general public can identify a deepfake, and on whether our own biases hinder us. This paper builds upon the field of research on deepfakes and fake news, showing that repeated exposure makes deepfakes more believable, but that an analytical mindset has no effect. High social media usage could be a factor in better identifying deepfakes, which suggests that individuals might have the proper repertoire to distinguish deepfakes.

KEYWORDS

Deepfakes; Detection of Deepfakes; Fake News; Fluency; System 1 & System 2

INDEX

1. Introduction

2. Literature Review

2.1 Deepfakes

2.2 Disinformation and Social Media

2.3 Cognitive Biases

2.4 Repeated Exposure

3. Methodology

3.1 Measures and Design

3.2 Video Selection

4. Results and Discussion

4.1 Participants

4.2 Results

4.3 Discussion

5. Conclusion

6. Limitations and Recommendations for Future Works

7. Bibliography


1. INTRODUCTION

Introduction

Nowadays social media is such a staple part of our lives that it is difficult to imagine a world without it. With 4.6 billion users, more than half of the planet is using platforms such as Facebook, WhatsApp, YouTube, Instagram, and many more. On average, we spend two and a half hours per day on social media (DataReportal, 2022).

With such high usage, there has been a shift in what people use social media for. Entertainment is no longer the only reason; keeping up with others and getting information, whether news stories, articles, or videos, has become one of the main drivers (DataReportal, 2022; Fletcher, 2018; Fletcher & Nielsen, 2017).

This dependency has also raised concerns, as it seems that the social media platforms do not have our best interest at heart and prioritize clicks, shares, ads, and money (Orlowski, 2020; Warzel, 2018). The documentary 'The Social Dilemma' (Orlowski, 2020) confirms this by showing how their algorithms are optimized to keep us scrolling for as long as possible. The main message is that if you are not paying for the product, you are the product. However, the documentary also touches upon another issue: the platforms' objective of generating more money gives them an incentive to promote misleading and polarizing media, because keeping people in their bubble generates more revenue (Orlowski, 2020; Warzel, 2018).

As the algorithms of social media platforms continue to push this kind of information, the credibility of facts starts to erode (Citron & Chesney, 2019). Social media plays an important role in the rise and spread of fake news, which has led to the post-truth era, or the "infocalypse" (Fletcher, 2018; Iacobucci et al., 2021; Warzel, 2018). People have become more doubtful of the integrity of mass media and institutions (Chesney & Citron, 2019; Fraga-Lamas & Fernández-Caramés, 2020).

Some extremists have even developed a distrust of information overall (Chesney & Citron, 2019; Kietzmann et al., 2021; Vaccari & Chadwick, 2020). The best-known example was the fake news circulating on Facebook during the 2016 U.S. elections (Citron & Chesney, 2019; Fletcher, 2018). Twitter even had to start doing quality checks on the tweets of Donald Trump due to the fake news he was posting about the 2020 U.S. elections (Paul & Culliford, 2020). More recently, the COVID-19 pandemic caused a spike in the spread of fake news, for example the claim that the vaccine would also inject 5G (Apuke & Omar, 2020; Hassan & Barber, 2021; Pennycook, McPhetres, et al., 2020).

However, there is a new type of fake news. It can show Obama insulting Trump by calling him a dipshit, even though it never happened (Peele, 2018). These altered videos and audio are called deepfakes (Fletcher, 2018; Kietzmann et al., 2021). Deepfakes add another level of difficulty to assessing the legitimacy of content. As deepfakes are developing at an accelerating rate, multiple authors have concluded that 'seeing is no longer believing' (Cochran & Napshin, 2021; Fletcher, 2018; Karnouskos, 2020; Kietzmann et al., 2020). Deepfakes have the ability to mass-communicate disinformation around the globe (Chesney & Citron, 2019; Masood et al., 2021). They can easily be interpreted as credible due to the authenticity automatically associated with visual media (Hwang et al., 2021; Vaccari & Chadwick, 2020). Research has shown that it is difficult for people to properly identify deepfakes (Cochran & Napshin, 2021; Groh et al., 2021; Iacobucci et al., 2021; Partadiredja et al., 2020; Rössler et al., 2018; Vaccari & Chadwick, 2020). Deepfakes could even make us remember events that never happened (Frenda et al., 2013; Murphy & Flynn, 2021).

Current limitations in deepfake research

There has been a lot of research since deepfakes appeared on Reddit in 2017 (Fletcher, 2018). The research has shifted from explaining deepfakes (Chesney & Citron, 2019; Kietzmann et al., 2020; Mirsky & Lee, 2020; Westerlund, 2019) to detecting them with algorithms (Agarwal et al., 2020; Albahar & Almalki, 2019; Malolan et al., 2020) or blockchain (Fraga-Lamas & Fernández-Caramés, 2020; Hasan & Salah, 2019; Ki Chan et al., 2020). Social media platforms are feeling pressure to stop the spread of disinformation, including deepfakes; for example, Facebook and Twitter are using their algorithms to try to identify deepfakes (Schoolov, 2019).

Although there is plenty of attention on detecting deepfakes with technology, there has been less literature on the detection of deepfakes by humans (Groh et al., 2021; Iacobucci et al., 2021; Vaccari & Chadwick, 2020), and in those experiments algorithms were sometimes still involved (Groh et al., 2021). In the research of Vaccari and Chadwick (2020), respondents were not deceived by the Obama deepfake of Jordan Peele. A possible explanation is that this video went viral when it was released in 2018, so people had seen it online before. Cochran & Napshin (2021) focused more on awareness of and concerns about deepfakes than on the ability to identify them. Most of their respondents stated that they had probably been misled by a deepfake. Although concern about deepfakes was very high, the respondents would not reduce their video usage for identifying important issues. It is worrying that people are not willing to change their behaviour, even though they are unable to discern between content created by humans and content created by Artificial Intelligence (AI) around half of the time (Partadiredja et al., 2020).

Research is abundant about fake news and whether individuals can correctly identify it (Halpern et al., 2019; Martel et al., 2020; Pennycook, 2018; Pennycook, Bear, et al., 2020; Pennycook & Rand, 2019; Talwar et al., 2019). Research in this area was most often conducted in a political context (e.g., Halpern et al., 2019; Martel et al., 2020), but it also covered memories (Frenda et al., 2013) and attitudes (e.g., Martel et al., 2020). The groundwork laid in the field of fake news can be used as a baseline for research about deepfakes. As deepfakes are a recent topic, it is important to first validate the main findings for fake news, such as the claim that an analytical mindset is better at identifying fake news (Pennycook & Rand, 2019, 2021).

The field of fake news also shows that our human nature plays a big role, in the form of biases or mental states (e.g., Clayton et al., 2020; Garcia-Marques, Prada, et al., 2016). This is an aspect the deepfake field has not yet touched upon, as its main focus has been combating deepfakes with technology or raising awareness and concern about them (Cochran & Napshin, 2021; Fletcher, 2018). In these papers, concern about the dangers of deepfakes is high, but this has not yet reached the general public. The documentary 'Coded Bias' (Kantayya, 2020), which came out in 2020, follows the trend of 'The Social Dilemma' in raising awareness. However, there is still a behaviour-action gap when it comes to fake information: even if people are exposed to it on social media, they will not reduce their social media usage (Cochran & Napshin, 2021).

More research must be conducted on how well individuals can identify deepfakes, on whether what was found in research about fake news can be applied to deepfakes, and on whether any cognitive biases influence how accurately we identify deepfakes without any prior notification. That will be the main focus of this paper.

Research Question and Objectives

Based on the current limitations within the research field of deepfakes, the research question for this paper is the following:

How can human psychology, such as biases, affect the ability of an individual to correctly identify a deepfake?

The objectives of this research are:

1. Determine whether an individual can correctly identify deepfakes when exposed to them;

2. Assess whether mindset can influence this ability to identify a deepfake;

3. Understand how distrust and social media usage could influence perception and accuracy;

4. Explore how biases and other aspects of the human mind can be leveraged to make individuals more resilient against disinformation, specifically deepfakes.

Contribution of this paper

As deepfakes are developing faster than their detection counterparts, it is likely that anyone will encounter a deepfake without knowing it, and chances are low that they will recognize it is not real (Cochran & Napshin, 2021; Partadiredja et al., 2020). This research will help understand whether an individual would be able to identify a deepfake on their own, which is still quite an underdeveloped research area. Furthermore, measures from different disciplines will be used for this experiment, which can show the impact of concepts such as mindset and repeated exposure on another form of disinformation beyond fake news.

It is important that deepfakes, as well as their risks and consequences, are discussed more widely (Chesney & Citron, 2019; Citron & Chesney, 2019; Fletcher, 2018). The impact of deepfakes is often misjudged; they can even create false memories (Murphy & Flynn, 2021). Furthermore, it is very difficult for people to change their opinion after certain misconceptions have been proven wrong, especially if the misconceptions were negative (Thorson, 2016), which has huge implications for politics and for the defamation of both public figures and private individuals. Therefore, we need all the support and research that can help prevent this.

Methodology

This paper is structured as follows. The next chapter presents the literature review on deepfakes, disinformation, cognitive biases, and repeated exposure; the hypotheses and research model are also introduced there. Chapter 3 discusses the methodology: how the survey was set up, which independent and dependent variables were used, and which videos were used for the experiment. The results are presented in chapter 4, which includes the main findings as well as the discussion. Chapter 5 contains the conclusion, while chapter 6 discusses the limitations of the findings and recommendations for future research.


2. LITERATURE REVIEW

2.1 DEEPFAKES

The word 'deepfake' originated from the Reddit username of the user who posted the initial pornographic videos of female celebrities on the platform, together with the code. It is a combination of the words 'deep learning' and 'fakes' (Kietzmann et al., 2020). Deepfakes are a product of Artificial Intelligence, within the branch of Machine Learning (ML). The definitions of Fletcher (2018) and Kietzmann et al. (2020) will be used to explain deepfakes, as they align with the nature of this research, which does not focus solely on technical specifications.

Deepfakes are based on deep learning algorithms, which learn how to classify on their own. If you give a deep learning algorithm a large dataset of pictures labelled as cats and dogs, the algorithm will process those images to identify the most relevant features for distinguishing the pictures of a dog from the pictures of a cat. This results in a highly effective prediction algorithm stating whether a new picture is of a cat or a dog. With generative adversarial networks (GANs), this is taken a step further: two neural networks are run against each other. One network discriminates reals from counterfeits, while the other seeks to beat the discriminator by generating data (Fletcher, 2018).
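To make the adversarial setup concrete, below is a minimal, illustrative sketch of a GAN training loop in Python with PyTorch. The toy dimensions, network sizes, and the random stand-in for "real" data are assumptions for illustration only; actual deepfake models operate on video frames and are far larger.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64           # toy sizes, for illustration only

# The generator maps random noise to counterfeit samples.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# The discriminator scores samples: real (label 1) versus counterfeit (label 0).
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real images or frames

for step in range(200):
    # Discriminator step: learn to separate real samples from generated ones.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

As the two networks improve together, the generator's output becomes harder and harder for the discriminator, and eventually for humans, to tell apart from real data.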

Kietzmann et al. (2020) identified different types of deepfakes: photo, audio, video, and video & audio. The focus of this research will be on video and audio.

There are different deepfake techniques, such as lip-syncing, where only the movements of the mouth and the words spoken are changed. Then there is face-swapping, replacing the face of someone with the face of someone else. Lastly, full-body puppetry entails transferring somebody's movements to another body.

Deepfakes can also be used in advertising. Advertising is shifting to something called 'synthetic advertising', which entails ads that are generated or edited through the artificial and automatic production and modification of data (Campbell et al., 2021). This gives way to hyper-personalization: synthetic ads can create models matching your ethnicity and height, wearing products similar to what you bought previously. It could even be your own body on which the products are modelled. However, there is an ongoing tension between hyper-personalization and privacy (Campbell et al., 2021).

2.2 DISINFORMATION AND SOCIAL MEDIA

When information is misleading but there is no intention to deceive, it is called misinformation. Disinformation, however, is spread with the intention to deceive (Wardle & Derakhshan, 2017). Nowadays disinformation, including deepfakes, is mostly spread online through social media (Fletcher, 2018). It is extremely easy to create and access information on social media. However, the main issue is that there is no verification or fact-checking of sources (Karnouskos, 2020; Lazer et al., 2018). This has given rise to the currently most famous form of disinformation, fake news (Fraga-Lamas & Fernández-Caramés, 2020; Kasra et al., 2018; Pennycook, 2018; Pennycook & Rand, 2021).

As mentioned in the introduction, this is leading to the infocalypse, as one's filter bubble is often only amplified and overall scepticism is increased (Chesney & Citron, 2019; Fraga-Lamas & Fernández-Caramés, 2020; Halpern et al., 2019; Kietzmann et al., 2021; Vaccari & Chadwick, 2020; Van Duyn & Collier, 2019). This especially harms trust in traditional media: journalists need to spend even more time verifying their sources, while the news sector is declining and more dependent than ever on the public (Karnouskos, 2020). Research regarding trust in media sources and disinformation shows mixed results. For fake news, Halpern et al. (2019) found that trust in traditional media had no significant impact on credibility. Vaccari & Chadwick (2020) were able to show that the less an individual trusts the news, the more likely they are to be fooled by a deepfake. Deepfakes are seen as a new type of disinformation because someone intentionally alters the voice, face, or even the complete body of an individual in a video to make them say or do something they have not done in reality (Dobber et al., 2021; Hwang et al., 2021). Deepfakes can amplify the spread of disinformation due to their realistic nature, making them efficient and complicated to identify (Cochran & Napshin, 2021; Dobber et al., 2021; Iacobucci et al., 2021; Karnouskos, 2020; Vaccari & Chadwick, 2020). Therefore, the first hypothesis is H1: The less trust in traditional media (a) and the more trust in social media (b) an individual has, the more likely they will fall for a deepfake.

Our dependency on and frequent use of social media go hand in hand with encountering and believing disinformation (Cochran & Napshin, 2021; Halpern et al., 2019; Talwar et al., 2019; Vaccari & Chadwick, 2020). The initial assumption would be that frequent use of social media leads to more exposure to disinformation and, consequently, to believing those stories are accurate. There is research showing that people are more likely to trust famous people and acquaintances they follow on social media (Halpern et al., 2019; Kasra et al., 2018; Talwar et al., 2019). However, Halpern et al. (2019) found the opposite: people were less likely to believe fake news the more they used social media. There is a higher sense of awareness of disinformation online since Trump's involvement in the U.S. elections (Citron & Chesney, 2019; Fletcher, 2018; Paul & Culliford, 2020) and since documentaries about the dangers of social media and algorithms (Kantayya, 2020; Orlowski, 2020). As it has become more likely that people have already encountered a deepfake online, the next hypothesis is H2: High social media usage will increase the likelihood of correctly identifying a deepfake.

However, the sharing aspect of social media carries more risk of being fooled (Halpern et al., 2019; Talwar et al., 2019). Reasons to share something online include Fear of Missing Out (FOMO) and trust in the people we follow (Halpern et al., 2019; Talwar et al., 2019). Part of it can also be laziness to check for authenticity (Kasra et al., 2018; Talwar et al., 2019). On the other hand, a willingness to share something with others signals belief in the information (Halpern et al., 2019; Iacobucci et al., 2021; Talwar et al., 2019). This makes H3: The more willing an individual is to share on social media, the less likely they are to identify a deepfake.

2.3 COGNITIVE BIASES

Humans are always susceptible to cognitive biases, which have the ability to fool us. Our thinking is divided into what are called System 1 and System 2. System 1 is our automatic pilot, our initial reaction and subconscious, e.g., how to tie your shoelace or catch a ball thrown at you. System 2 is more rational and is used when we need to think things through, for example, deciding which bus to take. Often our System 1 overrules our System 2 in the initial stage, even though System 2 often wins as time passes. However, System 1 is more prone to biases (Kahneman, 2011). This often leads to people being overconfident in assessing their own behaviour, because things seem very logical in hindsight rather than beforehand; this is also called hindsight bias (Kahneman, 2011). Other biases, such as confirmation bias and self-serving bias, are also likely to interfere with the identification of deepfakes (Wood & Sanders, 2020). Despite this, individuals who have a more rational mindset, by engaging System 2, are better equipped to make sound judgements when engaging with disinformation (Pennycook & Rand, 2020).

Within the academic field, there is a divide on whether System 1 or System 2 is better for identifying fake news, and which one makes us believe it more. Two accounts, classical reasoning and motivated cognition, have been researched extensively. The classical reasoning account argues that focusing on our rational and analytical thinking will help identify and uncover fake news (Pennycook & Rand, 2019b; Ross et al., 2021); otherwise, it is more likely that the individual will be fooled by fake news or disinformation (Martel et al., 2020) due to a lack of thinking (Pennycook & Rand, 2019b). When a person is given more time, they will correct the initial intuitive mistakes that they make (Bago et al., 2020). Motivated cognition states that we should focus on our emotional responses, as our analytical response will make us more likely to believe fake news that is, for example, in line with our political views (Kahan, 2017; Kahan, 2013; Kahan et al., 2012). If someone wants to believe something is authentic, they will find ways to justify their view (Kasra et al., 2018).

Multiple researchers (Martel et al., 2020; Ross et al., 2021) have compared both sides and concluded that in specific cases emotional reasoning might work better for identifying fake news, but overall, rational and analytical thinking makes individuals more sceptical and alert, regardless of political preference. It is even associated with lower trust in fake news sources (Pennycook & Rand, 2019a; Pennycook & Rand, 2019b). General research on dual-process theories of judgment suggests that analytical thinking outperforms emotional thinking (Evans, 2003; Stanovich, 2004), and trusting your gut feelings is associated with believing in conspiracy theories and falsehoods in science and politics (Garrett & Weeks, 2017). As there is no research yet regarding deepfakes and state of mind, H4 states that the more analytical the mindset, the better the individual is at identifying a deepfake.

2.4 REPEATED EXPOSURE

As mentioned, our increasing dependency on social media and decreasing trust in traditional media sources are causing polarization. More often than not, those polarizing events involve fake news and the sharing of disinformation. The global pandemic is a good example: a lot of disinformation was spread about the vaccine and many conspiracy theories gained support (Apuke & Omar, 2020; Hassan & Barber, 2021; Pennycook, McPhetres, et al., 2020). More and more people are changing their opinions and believing more extreme or nonsensical ideas.

One of the reasons why these stories seem more and more believable is repeated exposure, which causes fluency. Fluency makes things feel more familiar after repeated exposure, as the stimulus becomes easier to process (Jacoby et al., 1989; Jacoby & Whitehouse, 1989). This happens because the individual starts to rely on non-analytical processing with easily accessible heuristic cues, rather than on the analytical processing of considering all arguments when forming an attitude. The analytical processing of System 2 decreases when there is repeated exposure (Claypool et al., 2015; Garcia-Marques, Prada, et al., 2016; Garcia-Marques & Mackie, 2007). Familiarity can even mistakenly create a feeling of liking and attraction in new situations (Bornstein & D'Agostino, 1994).

Multiple types of fluency have been identified in research (Alter & Oppenheimer, 2009). Perceptual fluency occurs when individuals experience more ease in identifying the physical features of a stimulus; for example, a high contrast between a word and the background makes it easier to process, which increases its fluency. Conceptual fluency is related to the meaning of a stimulus: the semantic meaning of the word 'butter' will come faster when it follows 'bread' than when it follows 'book'. Lastly, memory-based fluency occurs when there is ease of encoding or retrieval; when studying for a foreign-language test, it is easier to learn words in small batches than in long lists (Alter & Oppenheimer, 2009; Claypool et al., 2015; Schwarz et al., 1991). All these types of fluency can make information seem more believable and truthful.

Research supports this illusory truth effect, whereby repeated stimuli are perceived as more truthful than new stimuli (Dechene et al., 2010; Garcia-Marques, Prada, et al., 2016; Garcia-Marques, Silva, et al., 2016; Hassan & Barber, 2021). Within the research on fake news and disinformation, Pennycook (2018) finds evidence that repeated exposure increases perceived accuracy. This is in line with other research about fluency (Dechene et al., 2010; Garcia-Marques, Prada, et al., 2016; Hassan & Barber, 2021). For this reason, H5 states that being repeatedly exposed to a deepfake will make it more believable.


3. METHODOLOGY

3.1 MEASURES AND DESIGN

The experiment was created in the form of a survey with a mix of deepfake and real videos. There was a single between-participants factor for mindset and a single within-participants factor for fluency. Participants were informed which videos were fake at the end of the survey. The demographic questions concerned gender, age, education, and nationality. Table 1 displays an overview of the scale items.

Table 1

Measurement scales and items

Cognitive Reflection Test (CRT) (Frederick, 2005)
  CRT 1: If a baseball and a bat cost $1.10 together, and the bat costs $1.00 more than the ball, how much does the ball cost?
  CRT 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
  CRT 3: In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Non-mathematical CRT (Thompson & Oppenheimer, 2016)
  CRT 4: If you're running a race and you pass the person in second place, what place are you in?
  CRT 5: A farmer had 15 sheep and all but 8 died. How many are left?
  CRT 6: Emily's father has three daughters. The first two are named April and May. What is the third daughter's name?

Detection of Deepfakes (DD) (Pennycook & Rand, 2019b)
  DD 1: Have you seen or heard about this story before?
  DD 2: To the best of your knowledge, how accurate is the claim in the above headline?
  DD 3: Would you consider sharing this story online?

Frequency of Social Media Use (SM) (Halpern et al., 2019)
  SM 1: How much time do you spend on a social media platform?

News in Social Media (SM) (Halpern et al., 2019)
  SM 2: How often do you see news on social media?

Perceived Media Credibility (MC) (Talwar et al., 2019)
  MC 1: I find most news posted on social networks believable
  MC 2: I find most news posted on social networks accurate
  MC 3: I find most news posted on social networks trustworthy
  MC 4: I find most news posted on social networks complete

3.1.1 Analytical Mindset

The goal of the between-participants factor was to prime participants into either an analytical or an emotional mindset, so they would use it to assess the videos (Iacobucci et al., 2021; Pennycook & Rand, 2020). Following previous research (Martel et al., 2020; Pennycook & Rand, 2019b), a robust and tested measurement of an individual's analytical mindset, the Cognitive Reflection Test (CRT), was used (Bialek & Pennycook, 2018). CRT performance should be negatively correlated with believing false or misleading news. In addition to the original CRT questions (Pennycook et al., 2015; Pennycook & Rand, 2019b, 2020), the non-mathematical questions of Thompson & Oppenheimer (2016) were added. For the emotional condition, there was an equal number of easy trivia questions that the participant should not have to put much thought and effort into.
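As an illustration only, CRT performance could be scored as the number of correct (reflective) answers to the six items in Table 1, along the lines of the sketch below. The exact-match scoring and answer strings are simplifying assumptions; free-text survey answers would need more normalization in practice.

# Correct answers to the CRT items in Table 1 (the intuitive but wrong answers
# would be 10 cents, 100 minutes, 24 days, first place, 7, and June).
CORRECT = {
    "CRT1": "5 cents",
    "CRT2": "5 minutes",
    "CRT3": "47 days",
    "CRT4": "second place",
    "CRT5": "8",
    "CRT6": "emily",
}

def crt_score(answers: dict) -> int:
    """Count reflective (correct) answers; a higher score indicates a more analytical responder."""
    return sum(
        answers.get(item, "").strip().lower() == correct.lower()
        for item, correct in CORRECT.items()
    )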

3.1.2 Repeated Exposure

The within-participants factor was designed to create fluency and to understand the impact of repetition on accuracy. For every participant, 4 of the 8 videos were repeated, selected at random: each participant first saw 4 of the 8 videos, chosen randomly, without any questions. After a filler question, all 8 videos were presented with questions, in random order.
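A minimal sketch of this exposure scheme, assuming hypothetical video labels (in practice, the survey platform handled the per-participant randomization):

import random

videos = [f"deepfake_{i}" for i in range(1, 5)] + [f"real_{i}" for i in range(1, 5)]

first_pass = random.sample(videos, 4)             # 4 of the 8 videos, shown without questions
second_pass = random.sample(videos, len(videos))  # all 8 videos, with questions, in random order

repeated = [v for v in second_pass if v in first_pass]   # seen twice: the fluency items
new = [v for v in second_pass if v not in first_pass]    # seen once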

3.1.3 Dependent Variables

Each participant was exposed to eight videos, four deepfakes and four real videos, and two of each category were repeated. To measure perceived accuracy, participants were asked after every video how accurate they thought its claim was, on a 4-point scale (1 = Not accurate at all, 4 = Completely accurate). Willingness to share was a 3-point question about how likely they would be to share the video online.

3.1.4 Social Media Usage

This was measured with two questions from Halpern et al. (2019): "How much time do you spend on social media?" and "How often do you see news on social media?". The answers were given on a 6-point scale (1 = Do not use social networks, 6 = 6 hours or more per day).

3.1.5 Trust in Traditional and Social Media

The scales of Talwar et al. (2019) and Fang et al. (2016) were adjusted to fit both traditional and social media. "I find most news posted on social/traditional media trustworthy", "I find most news posted on social/traditional media accurate", and "I find most news posted on social/traditional media believable" were all 5-point questions (1 = completely disagree, 5 = completely agree).

3.2 VIDEO SELECTION

Following the approach of other researchers (Lee et al., 2021; Vaccari & Chadwick, 2020), the deepfake videos were taken from YouTube. Due to the limited number of deepfakes on YouTube without watermarks, the deepfakes focus on politicians or actors. Similar real videos related to politics or celebrities were found that could be perceived as unlikely to happen. One sentence was placed above each video as a description, and the videos were all cut to around 15 seconds. The videos can be found here: link. Image 1 shows an example. The deepfake videos include:

 Chris Pratt deepfaked into the first Indiana Jones movie as Indiana Jones

 Salvador Dalí talking about life and death, in colour, created by the Dalí Museum in the United States with an algorithm from scratch and a voice actor

 Vladimir Putin holding a speech about American democracy, created for a United States anti-corruption campaign by Represent.Us

 President Trump claiming that AIDS has been eradicated, created by the French charity Solidarité Sida

As for the real videos:

 A 100-year-old French lady confuses former Chancellor Merkel with the wife of President Macron

 President Trudeau demands an apology from the Pope for the harm done to Indigenous people, based on the children's graves found at Catholic schools

 Kim Kardashian talking about her social media usage in the Vogue 70 questions interview; a deepfake was created of this scene but could not be used due to a watermark

 Steve Carell, known from the show The Office, as the voice actor of Gru in Despicable Me


4. RESULTS AND DISCUSSION

4.1 PARTICIPANTS

In total, 188 participants took part in the study. Of these, 121 provided complete responses. The final sample of N = 121 consists of 54 males and 67 females. As Figure 1 shows, 86% of the sample is between 18 and 34 years old.

Based on Figure 2, 52% of the sample has a master's degree. The majority are either full-time employees or students. The dataset is therefore skewed toward young people with a high level of education who are either working or studying.

Figure 1. Age distribution
Figure 2. Education split

4.2 RESULTS

One factor that could influence the identification of deepfakes is having low trust in traditional media (H1a) and high trust in social media (H1b). Dummy variables were created for these hypotheses, with 0 = low trust and 1 = high trust.

For H1a, the independent variable is trust in traditional media and the dependent variable is the perceived accuracy of the false videos. An independent-samples t-test shows no significant difference between low and high trust in traditional media (t = -1.433, df = 119, p = 0.155). However, the results in Figure 3 do show the expected trend: people with lower trust in traditional media perceived deepfakes as more accurate. With a bigger sample size, these results might become significant; nonetheless, H1a is currently not supported.

Table 2

Descriptives of trust in traditional media

Traditional Media Trust     N     Mean    SD      SE
High                        97    1.946   0.555   0.056
Low                         24    2.125   0.521   0.106

Figure 3. Interaction between Traditional Media Trust and Perceived Accuracy of False Videos

For H1b, the independent variable is trust in social media and the dependent variable is the perceived accuracy of the false videos. Here the independent-samples t-test should show the opposite effect compared to H1a. Indeed, Figure 4 shows that participants with higher trust in social media rated the deepfakes as more accurate, but the result was not significant (t = 0.934, df = 119, p = 0.352). Therefore, H1b is also not supported.

Table 3

Descriptives of trust in social media

Social Media Trust     N     Mean    SD      SE
High                   58    2.030   0.568   0.075
Low                    63    1.937   0.535   0.067


Figure 4. Interaction between Social Media Trust and Perceived Accuracy of False Videos
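The independent-samples t-tests reported for H1a and H1b can be reproduced along the lines of the sketch below. The DataFrame layout and column names are assumptions; the thesis does not state which statistical software was used.

import pandas as pd
from scipy import stats

def trust_ttest(df: pd.DataFrame, trust_col: str, dv: str = "acc_false"):
    """Compare mean perceived accuracy of false videos between low- and high-trust groups."""
    low = df.loc[df[trust_col] == 0, dv]    # 0 = low trust (dummy coding)
    high = df.loc[df[trust_col] == 1, dv]   # 1 = high trust
    return stats.ttest_ind(high, low)       # returns the t statistic and p-value

# Example: trust_ttest(df, "trad_trust") for H1a, trust_ttest(df, "sm_trust") for H1b.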

High social media usage should make it easier to identify a deepfake. For this hypothesis, the independent variable is social media usage, while the dependent variable is perceived accuracy. H2 was checked with a correlation matrix, where a negative relation with the perceived accuracy of false videos and a positive relation with the perceived accuracy of true videos is expected.

There is no significant correlation between the perceived accuracy of false videos and social media usage (r = 0.038, p = 0.678). However, there is marginal statistical significance between the perceived accuracy of true videos and social media usage (r = 0.176, p = 0.053), as shown in Table 4. It is a weak positive correlation: when participants used social media more often, they also rated the truthful videos as more accurate. Therefore, H2 is only partially supported.

Table 4

Correlation matrix of perceived accuracy of false and true videos with social media usage

                                     Perceived Accuracy       Perceived Accuracy
                                     False Videos             True Videos
Perceived Accuracy True Videos       r = 0.236 (p = 0.009)
Social Media Usage                   r = 0.038 (p = 0.678)    r = 0.176 (p = 0.053)
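A minimal sketch of how the correlations in Table 4 (and Table 5 below) could be computed, again with hypothetical per-participant column names:

import pandas as pd
from scipy import stats

def correlation_report(df: pd.DataFrame, pairs):
    """Print Pearson's r and p for each pair of per-participant columns."""
    for x, y in pairs:
        r, p = stats.pearsonr(df[x], df[y])
        print(f"{x} vs {y}: r = {r:.3f}, p = {p:.3f}")

# correlation_report(df, [("acc_false", "sm_usage"), ("acc_true", "sm_usage"),
#                         ("acc_false", "acc_true")])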

To understand whether an analytical mindset aided in identifying a deepfake, a 2 (Objective Video Truthfulness: True, False) x 2 (Condition: Analytical, Emotional) repeated-measures ANOVA was conducted for H4, with condition as the between-subjects factor. With this approach, the independent variables are the objective video truthfulness (whether the video is a deepfake or not, within subjects) and the type of mindset (analytical or emotional, between subjects). The dependent variable is the perceived video accuracy, i.e., how accurate the participant rated the video. There was a main effect of objective video truthfulness (F(1,119) = 182.35, p < .001), showing that participants judged deepfakes as less accurate (M = 1.981, SD = 0.551) than real videos (M = 2.806, SD = 0.529). Participants most often chose "not entirely accurate" on the scale for deepfakes, while for truthful videos the ratings were closer to "mostly accurate". However, there was no interaction effect with condition (F(1,119) = 0.601, p = 0.440): participants did not judge the videos differently when primed with an analytical rather than an emotional mindset. This contradicts previous findings for fake news (Martel et al., 2020; Pennycook & Rand, 2019). Therefore, H4 is not supported.
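The 2 x 2 mixed ANOVA described above could be run, for example, with the pingouin package; the tool choice, long-format data layout, and column names below are assumptions.

import pandas as pd
import pingouin as pg

# long_df: one row per participant x video-type cell, with columns
#   participant (ID), truthfulness ("true"/"false", within-subjects),
#   condition ("analytical"/"emotional", between-subjects), accuracy (mean rating).
def mindset_anova(long_df: pd.DataFrame) -> pd.DataFrame:
    return pg.mixed_anova(
        data=long_df,
        dv="accuracy",
        within="truthfulness",
        between="condition",
        subject="participant",
    )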

H3 focuses on willingness to share and on whether an increase in it reduces the ability to identify a deepfake. The independent variable here is willingness to share, and the dependent variables are the perceived accuracy of false videos and the perceived accuracy of true videos. There is no significant correlation between the perceived accuracy of false videos and willingness to share (r = -0.081, p = 0.374). Interestingly, there is marginal statistical significance between the perceived accuracy of true videos and willingness to share (r = -0.178, p = 0.051). A positive correlation would be expected; however, it is a negative one: participants deemed the true videos less accurate the more willing they were to share them. The correlation is quite weak. In conclusion, H3 is not supported.

Table 5

Correlation matrix of perceived accuracy of false and true videos with willingness to share

                                     Perceived Accuracy       Perceived Accuracy
                                     False Videos             True Videos
Perceived Accuracy True Videos       r = 0.236 (p = 0.009)
Willingness to Share                 r = -0.081 (p = 0.374)   r = -0.178 (p = 0.051)

H5 tested whether repeated exposure influences people into believing deepfakes are more accurate. The independent variables are objective video truthfulness and repeated exposure; the dependent variable is perceived video accuracy. A 2x2 within-subjects ANOVA showed a main effect of objective video truthfulness, F(1,120) = 184.599, p < 0.001: the truthful videos (M = 2.806, SD = 0.529) were perceived as significantly more accurate than the deepfakes (M = 1.981, SD = 0.551).

There is also a statistically significant effect of exposure, F(1,120) = 5.084, p = 0.026, in the expected direction. Being exposed to a deepfake for the second time (M = 2.083, SD = 0.781) caused participants to judge it as more accurate than a deepfake seen for the first time (M = 1.880, SD = 0.652). The real videos show the same pattern between new (M = 2.777, SD = 0.701) and repeated (M = 2.835, SD = 0.699) videos. The interaction between video truthfulness and exposure was not significant, F(1,120) = 1.418, p = 0.236, meaning the effect of repetition did not differ significantly between true and false videos. Overall, H5 is supported.

Table 6

Descriptives of video truthfulness and repeated exposure

Video truthfulness   Exposure    Mean    SD      N
True                 Repeated    2.835   0.699   121
True                 New         2.777   0.701   121
False                Repeated    2.083   0.781   121
False                New         1.880   0.652   121
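The fully within-subjects 2 x 2 ANOVA behind H5 could be run along these lines with statsmodels (an assumed tool; the long-format layout and column names are hypothetical):

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# long_df: one row per participant x truthfulness x exposure cell, with the mean
# perceived accuracy for that cell in the "accuracy" column.
def exposure_anova(long_df: pd.DataFrame):
    model = AnovaRM(
        data=long_df,
        depvar="accuracy",
        subject="participant",
        within=["truthfulness", "exposure"],
    )
    return model.fit()   # .summary() reports F and p for both main effects and the interaction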

4.3 DISCUSSION

Human biases have an impact on how individuals identify a deepfake, but not as much as was expected. The participants in this research were able to discern that something was off with the deepfakes, as they gave them a significantly lower score than the true videos. It was unexpected that the mindset of an individual had no impact on their ability to identify a deepfake, as this has been shown multiple times for fake news (Martel et al., 2020; Pennycook & Rand, 2019). One explanation is that a deepfake, being a moving image, requires a different kind of analysis than an edited still image. Another is that certain stories are so outrageous that people are unlikely to believe them regardless.

The trust people have in traditional and social media does seem to have a possible impact. People who were more sceptical towards traditional media were more likely to perceive deepfakes as accurate than people with high trust in traditional media. An opposite effect was shown for trust in social media, although neither effect was significant. These findings align with other research (Vaccari & Chadwick, 2020).

The participants were not necessarily more sceptical towards deepfakes, as found in previous research (Halpern et al., 2019), but they were more likely to believe true videos when they used social media frequently. One explanation is that frequent users see more news and entertainment videos and are more aware of current topics.

Even though social media usage helps more than expected with the identification of deepfakes, direct repeated exposure is a stronger factor that can make deepfakes more believable. As earlier research has shown (Claypool et al., 2015; Garcia-Marques, Prada, et al., 2016; Garcia-Marques & Mackie, 2007), having a video repeated only once already made it be perceived as more accurate than new, unseen videos.

Interestingly, willingness to share the videos did not have a clear impact on the deepfakes, but it did on the true videos. Participants deemed the true videos less accurate the more willing they were to share them, which is the opposite of what was expected. This is quite different from what was found by Talwar et al. (2019).


5. CONCLUSION

Deepfakes are pushing the boundaries of reality and enabling the massive spread of disinformation. A significant amount of research focuses on fighting deepfake algorithms with detection algorithms, which is a fierce battle. On this battlefield, the individual in their day-to-day life has been forgotten, even though it is more likely than not that they have already encountered a deepfake. However, it seems that individuals have some arsenal of their own to identify deepfakes. Our continuous presence on social media might have a positive impact in this regard, as it can help us discern between what is outrageous and fake and what is backed up by facts. Education and age might also play a role, as certain ideologies are pushed further into bubbles where people are more likely to believe fake news, given the behaviour of the politicians of those ideologies and their fact-checking habits.

Our human biases will interfere with this ability once in a while, as biases such as perceiving things as more accurate after repeated exposure seem to hold for deepfakes as well. But it does not seem that we have to be in System 1 or System 2 to correctly identify a moving image that has been altered or fabricated altogether. As the world becomes more polarized, social media will continue to exist and disinformation will continue to spread. Research on counter-algorithms, education, and awareness will remain important.


6. LIMITATIONS AND RECOMMENDATIONS FOR FUTURE WORKS

The main limitation of this research was the sample size. The sample was quite skewed, with the majority being between 18 and 34 years old and having completed a master's degree. The sample also had extremely high trust in traditional media and high social media usage. This is most likely due to the data collection method, through acquaintances and posts on the researcher's social media. There were some marginally significant results and trends that could have been more robust and significant with a larger sample size. Furthermore, looking at the incomplete responses, the majority dropped off at the video section of the survey.

There were also some limitations regarding the scales. The scales adopted from Pennycook et al. (2015) and Pennycook & Rand (2019) were too limited to measure willingness to share, as it was a three-point no/maybe/yes question. There should also have been more questions about willingness to share, e.g., more related to how participants perceive videos shared by acquaintances. Also, the questions used as the counterpart to the CRT might not have been sufficiently emotional compared to previous research, as they were trivia questions made by the researcher. Some survey feedback indicated that some participants thought it mattered whether they answered these mindset questions correctly.

The final limitation is related to the setup of the experiment, as several other experiments used a more realistic setting, such as mock-ups of Facebook posts or news articles. In addition, the video format changed quite a bit between desktop and mobile, which might lead to different quality perceptions.

For future research, it would be advisable to have a broader data collection. It would also be interesting to include political ideology, as this has been used by many researchers, and some results show that the left wing might be prone to believing fake news (Pennycook, 2018; Pennycook & Rand, 2019). This can also be related to the online behaviour of echo chambers (Halpern et al., 2019). If a deepfake could be created specifically for the experiment, there would be no outside factors that could impact familiarity or repeated exposure. There could also be more in-depth research on what individuals use as identifiers of fake versus real, similar to Halpern et al. (2019) and Talwar et al. (2019).


7. BIBLIOGRAPHY

Agarwal, S., Farid, H., El-Gaaly, T., & Lim, S.-N. (2020). Detecting Deep-Fake Videos from Appearance and Behavior. 2020 IEEE International Workshop on Information Forensics and Security, WIFS 2020. Scopus.

https://doi.org/10.1109/WIFS49906.2020.9360904

Albahar, M., & Almalki, J. (2019). Deepfakes: Threats and countermeasures systematic review. 22, 9.

Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election.

Journal of Economic Perspectives, 31(2), 211–236.

https://doi.org/10.1257/jep.31.2.211

Alter, A., & Oppenheimer, D. M. (2009). Uniting the Tribes of Fluency to Form a

Metacognitive Nation. https://journals.sagepub.com/doi/10.1177/1088868309341564

Apuke, O. D., & Omar, B. (2020). Modelling the antecedent factors that affect online fake

news sharing on COVID-19: The moderating role of fake news knowledge. Health Education Research, 35(5), 490–503. https://doi.org/10.1093/her/cyaa030

Bialek, M., & Pennycook, G. (2018). The cognitive reflection test is robust to multiple exposures. Behavior Research Methods, 50(5), 1953–1959.

https://doi.org/10.3758/s13428-017-0963-x

Bornstein, R. F., & D’Agostino, P. R. (1994). The attribution and discounting of perceptual fluency: Preliminary tests of a perceptual fluency/attributional model of the mere exposure effect. Social Cognition, 12(2), 103–128.

https://doi.org/10.1521/soco.1994.12.2.103

Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820. Scopus.

https://doi.org/10.15779/Z38RV0D15J

Citron, D., & Chesney, R. (2019, February). Deepfakes and the New Disinformation War.

https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war

Claypool, H. M., Mackie, D. M., & Garcia-Marques, T. (2015). Fluency and Attitudes. Social and Personality Psychology Compass, 9(7), 370–382.

https://doi.org/10.1111/spc3.12179

Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., Kawata, A., Kovvuri, A., Martin, J., Morgan, E., Sandhu, M., Sang, R., Scholz-Bright, R., Welch, A. T., Wolff, A. G., Zhou, A., & Nyhan, B. (2020). Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media. Political Behavior, 42(4), 1073–1095.

https://doi.org/10.1007/s11109-019-09533-0

Cochran, J. D., & Napshin, S. A. (2021). Deepfakes: Awareness, Concerns, and Platform Accountability. CyberPsychology, Behavior & Social Networking, 24(3), 164–172.

https://doi.org/10.1089/cyber.2020.0100

DataReportal. (2022). Global Social Media Statistics. https://datareportal.com/social-media- users

Dechene, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The Truth About the Truth: A Meta- Analytic Review of the Truth Effect.

https://journals.sagepub.com/doi/abs/10.1177/1088868309352251

Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. (2021). Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364

Evans, J. St. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7(10), 454–459. https://doi.org/10.1016/j.tics.2003.08.012

Fang, J., Shao, Y., & Wen, C. (2016). Transactional quality, relational quality, and consumer e-loyalty: Evidence from SEM and fsQCA. 1205–1217.

Fletcher, J. (2018). Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New

Faces of Online Post-Fact Performance. Theatre Journal, 70(4), 455–471.

https://doi.org/10.1353/tj.2018.0097

Fletcher, R., & Nielsen, R. (2017). Are News Audiences Increasingly Fragmented? A Cross‐

National Comparative Analysis of Cross-Platform News Audience Fragmentation and Duplication. Journal of Communication, 67. https://doi.org/10.1111/jcom.12315

Fraga-Lamas, P., & Fernández-Caramés, T. M. (2020). Fake News, Disinformation, and

Deepfakes: Leveraging Distributed Ledger Technologies and Blockchain to Combat Digital Deception and Counterfeit Reality. IT Professional, 22(2), 53–59.

https://doi.org/10.1109/MITP.2020.2977589

Frenda, S. J., Knowles, E. D., Saletan, W., & Loftus, E. F. (2013). False memories of fabricated political events. Journal of Experimental Social Psychology, 49(2), 280–

286. https://doi.org/10.1016/j.jesp.2012.10.013

Garcia-Marques, T., & Mackie, D. M. (2007). Familiarity impacts person perception.

European Journal of Social Psychology, 37(5), 839–855.

https://doi.org/10.1002/ejsp.387

Garcia-Marques, T., Prada, M., & Mackie, D. M. (2016). Familiarity increases subjective positive affect even in non-affective and non-evaluative contexts. Motivation and Emotion, 40(4), 638–645. https://doi.org/10.1007/s11031-016-9555-9

Garcia-Marques, T., Silva, R. R., & Mello, J. (2016). Judging the Truth-Value of a Statement In and Out of a Deep Processing Context. Social Cognition, 34(1), 40–54.

https://doi.org/10.1521/soco.2016.34.1.40

Groh, M., Epstein, Z., Firestone, C., & Picard, R. (2021). Comparing Human and Machine Deepfake Detection with Affective and Holistic Processing. arXiv:2105.06496 [cs].

http://arxiv.org/abs/2105.06496

Halpern, D., Valenzuela, S., Katz, J., & Miranda, J. P. (2019). From Belief in Conspiracy Theories to Trust in Others: Which Factors Influence Exposure, Believing and Sharing Fake News. In G. Meiselwitz (Ed.), Social Computing and Social Media. Design, Human Behavior and Analytics (pp. 217–232). Springer International Publishing. https://doi.org/10.1007/978-3-030-21902-4_16

Hassan, A., & Barber, S. J. (2021). The effects of repetition frequency on the illusory truth effect. Cognitive Research: Principles and Implications, 6(1), 38.

https://doi.org/10.1186/s41235-021-00301-5

Iacobucci, S., De Cicco, R., Michetti, F., Palumbo, R., & Pagliaro, S. (2021). Deepfakes Unmasked: The Effects of Information Priming and Bullshit Receptivity on Deepfake Recognition and Sharing Intention. CyberPsychology, Behavior & Social Networking, 24(3), 194–202. https://doi.org/10.1089/cyber.2020.0149

Jacoby, L. L., Kelley, C., Brown, J., & Jasechko, J. (1989). Becoming famous overnight:

Limits on the ability to avoid unconscious influences of the past. Journal of Personality and Social Psychology, 56(3), 326–338. https://doi.org/10.1037/0022-3514.56.3.326

Jacoby, L. L., & Whitehouse, K. (1989). An illusion of memory: False recognition influenced by unconscious perception. Journal of Experimental Psychology: General, 118(2), 126–135. https://doi.org/10.1037/0096-3445.118.2.126

Kantayya, S. (Director). (2020). Coded Bias [Documentary]. 7th Empire Media.

Karnouskos, S. (2020). Artificial Intelligence in Digital Media: The Era of Deepfakes. IEEE Transactions on Technology and Society, 1(3), 138–147.

https://doi.org/10.1109/TTS.2020.3001312

Kasra, M., Shen, C., & O’Brien, J. F. (2018). Seeing Is Believing: How People Fail to Identify Fake Images on the Web. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 1–6.

https://doi.org/10.1145/3170427.3188604

Kietzmann, J., Mills, A. J., & Plangger, K. (2021). Deepfakes: Perspectives on the future

“reality” of advertising and branding. International Journal of Advertising, 40(3), 473–

485. https://doi.org/10.1080/02650487.2020.1834211

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F.,

Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S.

A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096.

https://doi.org/10.1126/science.aao2998

Martel, C., Pennycook, G., & Rand, D. G. (2020). Reliance on emotion promotes belief in fake news. Cognitive Research: Principles and Implications, 5(1), 47.

https://doi.org/10.1186/s41235-020-00252-3

Masood, M., Nawaz, M., Malik, K. M., Javed, A., & Irtaza, A. (2021). Deepfakes Generation and Detection: State-of-the-art, open challenges, countermeasures, and way forward.

http://search.ebscohost.com/login.aspx?direct=true&db=edsarx&AN=edsarx.2103.00484&site=eds-live&scope=site&custid=ns000558&groupid=main&profile=eds&authtype=ip,guest

Murphy, G., & Flynn, E. (2021). Deepfake false memories. Memory, 1–13.

https://doi.org/10.1080/09658211.2021.1919715

Orlowski, J. (Director). (2020, January 26). The Social Dilemma [Documentary]. Netflix.

Partadiredja, R. A., Serrano, C. E., & Ljubenkov, D. (2020). AI or Human: The Socio-ethical Implications of AI-Generated Media Content. 2020 13th CMI Conference on

Cybersecurity and Privacy (CMI) - Digital Transformation - Potentials and Challenges (51275), 1–6. https://doi.org/10.1109/CMI51275.2020.9322673

Paul, K., & Culliford, E. (2020, May 26). Twitter fact-checks Trump tweet for the first time.

https://www.reuters.com/article/us-twitter-trump-idUSKBN232389

Peele, J. (Director). (2018). You won't believe what Obama says in this video! Buzzfeed.

Pennycook, G. (2018). Prior exposure increases perceived accuracy of fake news.

Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The Implied Truth Effect:

Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings. Management Science, 66(11), 4944–4957.

https://doi.org/10.1287/mnsc.2019.3478

Pennycook, G., Cheyne, J., Barr, N., Koehler, J., & Fugelsang, J. (2015). On the reception

and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563.

Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID- 19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy- Nudge Intervention. Psychological Science, 31(7), 770–780.

https://doi.org/10.1177/0956797620939054

Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011

Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), 185–200. https://doi.org/10.1111/jopy.12476

Pennycook, G., & Rand, D. G. (2021). The Psychology of Fake News. Trends in Cognitive Sciences, 25(3), 388–402. https://doi.org/10.1016/j.tics.2021.02.007

Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2018).

FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces.

arXiv:1803.09179 [cs]. http://arxiv.org/abs/1803.09179

Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991).

Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61(2), 195–202. https://doi.org/10.1037/0022-3514.61.2.195

Stanovich, K. (2004). The Robot’s Rebellion: Finding Meaning in the Age of Darwin.

Bibliovault OAI Repository, the University of Chicago Press.

https://doi.org/10.7208/chicago/9780226771199.001.0001

Talwar, S., Dhir, A., Kaur, P., Zafar, N., & Alrasheedy, M. (2019). Why do people share fake news? Associations between the dark side of social media use and fake news

sharing behavior. Journal of Retailing and Consumer Services, 51, 72–82.

https://doi.org/10.1016/j.jretconser.2019.05.026

Thorson, E. A. (2016). Belief Echoes: The Persistent Effects of Corrected Misinformation.

Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1), 2056305120903408. https://doi.org/10.1177/2056305120903408

Van Duyn, E., & Collier, J. (2019). Priming and Fake News: The Effects of Elite Discourse on

Evaluations of News Media. Mass Communication and Society, 22(1), 29–48.

https://doi.org/10.1080/15205436.2018.1511807

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe.

Warzel, C. (2018, February 11). Believable: The Terrifying Future Of Fake News. https://www.buzzfeednews.com/article/charliewarzel/the-terrifying-future-of-fake-news#.eo3l7O866

Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review, 9(11), 39–52.

https://doi.org/10.22215/timreview/1282

Wood, J., & Sanders, N. (2020). Dealing with ‘Deepfakes’: How Synthetic Media Will Distort Reality, Corrupt Data, and Impact Forecasts. Foresight: The International Journal of Applied Forecasting, 59, 32–37.
