
1 Introduction

Technological advances have threatened the jobs of many Americans. According to an article from Forbes, almost 47% of current jobs are vulnerable to automation (Quora 2017). Most of these jobs involve physical labor. Now another major threat from a technology-based, non-human competitor is emerging: Artificial Intelligence. For example, in competition between human and AI players, the AI Go program AlphaGo beat the most highly ranked human Go players; the competitive cognitive ability of AI relative to humans demonstrated the limits of human intellectual prowess. Moreover, even one of Silicon Valley's superstars, Elon Musk, warned that in the near future many people will lose their jobs to such technological innovations and be forced to depend on government handouts. Even though Musk's view is extreme, a dynamic change in the job market due to AI would be consistent with the history of jobs emerging and disappearing with technological development.

However, an optimistic response to this AI threat holds that some domains remain the exclusive purview of humans. Creativity is one such domain, sometimes considered the final wall that AI cannot breach. One World Economic Forum article suggested that creativity would become one of the most important abilities for job seekers by 2020 (Gray 2016). However, AI developers are trying to build systems that can conquer creativity as well, using tools such as Google's DeepDream to make works of art (Stecher 2017). Academics have therefore begun to ask whether creativity is sacred terrain restricted to human beings or a domain in which artificial intelligence can also excel. The approach of this study is to examine whether human judges are as likely to ascribe creativity to the artistic creations of artificial intelligence as to those of humans when the judges are evaluating objectively identical art: That is, do stereotypes and biases affect human perceptions of art when the artist is described differently (i.e., as AI or human)?

Communication involving messages is often construed as involving a sender and a receiver, and this is also the case for human-computer interaction. Google and other AI companies that are developing AI systems to make artwork have already demonstrated that perceivers cannot tell whether an artwork was produced by a human or by AI, essentially passing the Turing Test (Emerging Technology from the arXiv 2017). This suggests that these companies, in their role as “sender,” have already made great strides. What we do not know, however, is how participants as “receivers” react differently to art they are told was produced by AI (versus a human). Indeed, not only is there very little research examining this issue, but the topic has not even been acknowledged as a frontier for future investigation (Russell et al. 2015), perhaps because relatively few researchers operating in this domain are communication scholars. Thus, one goal of this study is to broaden the perspective that the ability or capacity of artificial intelligence should not be measured solely in terms of its potential societal effects (economic, legal, ethical, etc.), but also in terms of humans' perceptions of the meaning of what AI does.

Some (Erden 2010) have argued that artificial intelligence can only seem to be creative, but is not actually creative. Similarly, others have argued that what makes art “art” involves the intentions of artists, which, of course, AI lacks (Searle 2007). However, I argue here that most people, when evaluating art, are in fact little influenced by the intentions of the artist (Omasta 2011). What they do know is what they perceive: that is, whether they perceive the work to be art or not. My research question is whether people are willing to deem a product made by artificial intelligence “art.” This study therefore investigates the perception of audiences. I argue that if artificial intelligence can be seen to be creative and people cannot prove that it is not technically creative, then it is creative, based on the idea that the world is the world we see. The purpose of this study was to identify differences in the evaluation of the same artwork based on two different schemas triggered by different beliefs about the identity of the artist, using an experimental focus group discussion setting.

2 Theoretical Background

Schema Theory provides a useful theoretical framework for understanding audience perceptions of art based on the identity of the artist. Schema theory has philosophical roots as well as more modern psychological underpinnings, which are applied in the current research (Dahlin 2001). A schema is “an active processing data structure that organizes memory and guides perception, performance, and thought” (Norman and Rumelhart 1981). Schemata about art, for example, would include knowledge about art concepts, our perceptions of what makes art more or less artistic, art we have viewed and enjoyed or not, situations in which we have viewed art, and so forth. Humans also have schemata that include stereotypes about artificial intelligence and the creativity of AI. According to Dixon (2006), “These stereotypes are part of an associative network of related opinion nodes or schemas that are linked in memory, and activating one node in network spreads to other linked nodes” (p. 163). Schemas, based on prior experience, can be activated when we interpret new information. Thus, schema and bias (or stereotype) function similarly in cognitive processing (Dixon 2006). Schema theory is well suited to illustrating how a stereotype may affect cognitive processing: For example, when we view someone of another race, we may activate a schema that affects how we process information about that person. Not surprisingly, schema theory is widely used in media influence studies in which researchers are interested in how biased media portrayals of certain ethnicities influence media users' perceptions. I argue that an art piece is a medium carrying messages, so schema theory is applicable to research focused on artwork. Hence, the theory helps explain how stereotypes about artificial intelligence and its creativity may alter perceivers' views of artwork produced by artificial intelligence.

3 Research Question

This study examines the interplay between schemas about the creativity of artificial intelligence and the evaluation of AI creating art and its artworks. Creativity is often deemed a property that distinguishes human beings from artificial intelligence (Finn 2017). Thus, this research sought to determine how framing the artist as artificial intelligence, an entity believed to be less creative, influences both the general idea of AI creating art and the evaluation of its artwork.

  • RQ 1: How do different assumptions about the identity of the painter (AI or human) of the same art piece influence the evaluation of the art piece?

  • RQ 2: How do different assumptions about the identity of the painter (AI or human) of the same art piece influence the idea of artificial intelligence creating art?

4 Method

4.1 Overview and Design

Two types of focus groups (one told that the artist of a painting was a human, the other told it was AI) were formed through group blocking in order to keep the same ratio of educational level and gender in each group, with random assignment within those blocks. Across all groups, as participants arrived, a picture was presented on a monitor screen. When all participants were present, they were told that the picture had been produced either by a human or by AI. Then, based on their impressions of the picture, participants engaged in a discussion structured as follows: (a) participants were asked to define art, (b) participants were asked whether AI can make art, and (c) participants were asked whether the given work (which was the same in all groups) is “art.” The group told that the painter of the shown image was artificial intelligence served as the experimental group, and the group told that the painter was a human artist served as the control group. The responses from the two groups were then compared.

4.2 Participants

Twenty-eight participants were recruited from the University of Southern California, both undergraduate and graduate students from various fields. The aim was to capture diverse points of view toward the concept of art and more generalizable shared understandings of artificial intelligence. Participants were nonetheless recruited from a single school (USC) to make them feel more relaxed and thereby more likely to participate openly in the focus group (Corfman 1995). Also, the identity of participants was not revealed until the study was finished. This was accomplished by having participants use a fake name and by not sharing any further personal information, such as their major or whether they were undergraduate or graduate students. This led to the presumption that the other participants were also recruited students. Such limited information sharing not only protected personal information but also created a comfortable setting by allowing participants to assume the group was homogeneous. Even though the research involved random sampling within a single university, there was a gender imbalance: 24 participants were female and 4 were male. The average age of participants was 22.4 years (SD = 4.3). Table 1 contains a summary of participants' other key demographic data.

Table 1. Focus group participants’ demographic characteristics (n = 28)

This study involved multiple focus groups (4 in all; two of each type), and a similar ratio of educational level and gender was maintained across them. This was done because, if one group consisted of all graduate students and the other groups consisted of undergraduate students, differences between the groups might be due to educational level rather than a response to the knowledge about the identity of the artist. Thus, two graduate students and five undergraduate students were assigned to each of the four groups, keeping a similar gender ratio (see the assignment sketch below). Before the experiment began, a researcher briefed students on the purpose of the study and how it would proceed with the focus group moderator. After providing the necessary information and procedure of the study, the moderator collected informed consent before starting the focus group discussion. Because there was no reward for participating in the research, the study was completed by students who were genuinely interested in it.
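As a minimal illustration of this blocked assignment (a hypothetical sketch; the actual assignment was carried out manually by the researcher, and the participant records below are invented), the following Python snippet shuffles participants within each education-by-gender block and deals them out in a continuing round-robin so that each group ends up with a similar composition:

    import random
    from collections import defaultdict

    # Hypothetical participant records: (id, education level, gender).
    # The real sample had 8 graduate and 20 undergraduate students,
    # 24 women and 4 men, divided into 4 focus groups of 7.
    participants = [(i, "grad" if i < 8 else "undergrad",
                     "F" if i % 7 else "M") for i in range(28)]

    def blocked_assignment(people, n_groups=4, seed=42):
        """Shuffle within each (level, gender) block, then deal members out
        in a continuing round-robin so each group gets a similar mix."""
        rng = random.Random(seed)
        blocks = defaultdict(list)
        for person in people:
            blocks[(person[1], person[2])].append(person)   # block by level and gender
        groups = [[] for _ in range(n_groups)]
        idx = 0                                              # dealing position carries across blocks
        for block in blocks.values():
            rng.shuffle(block)                               # random order inside each block
            for person in block:
                groups[idx % n_groups].append(person)
                idx += 1
        return groups

    for g in blocked_assignment(participants):
        levels = [p[1] for p in g]
        print(len(g), "members:", levels.count("grad"), "grad,",
              levels.count("undergrad"), "undergrad")

With the invented records above, each of the four groups receives seven members with two graduate and five undergraduate students and a similar gender mix, mirroring the composition described in this section.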

4.3 Procedures

The research method was primarily focus-group discussion, because understanding art requires diverse and unexpected perspectives, given the ever-changing nature of art (Fokt 2017). Although the focus group method was applied, the setting differed from a conventional focus group. In a typical focus-group discussion, participants are recruited on the basis of a single characteristic (e.g., cultural background, social position, etc.) that fits the purpose of the research (Poindexter and McCombs 2000). This study took a more experimental approach, since the independent variable was whether participants were told that the artwork was made by AI or by a human. Unlike a typical focus group discussion, a goal of the study was to compare participants' reactions across the two types of focus groups.

For both conditions, a picture of “Standing in the sky” by Tanya Schultz (2014), a hand-crafted artwork with pastel-tone patterns that appears digitally or graphically constructed, was presented (see Fig. 1). This piece was selected because it is ambiguous enough that viewers cannot readily determine whether it is handcrafted or digitally produced, preventing identification of the artist. While the same picture was shown to every group, two groups were told that the art was produced by a human artist (hereinafter the “Human Artist Group”) (n = 14) and the other two groups were told that the art was produced by artificial intelligence (hereinafter the “AI Artist Group”) (n = 14). Before the focus-group discussion, an open-ended survey containing the same questions as the focus-group discussion was administered to prevent a bandwagon effect, in which participants follow the arguments of an opinion leader despite their actual beliefs. The questionnaire covered three general sets of questions, concerning participants' impressions of the given artwork, their own definition of art, and artificial intelligence's capability to produce artwork. After the focus group discussion, participants received a debriefing sheet describing the actual goal of the research and the actual identity of the artist.

Fig. 1. Standing in the sky.

4.4 Coding and Analysis

After responses from the series of focus group discussions were gathered via audio recording, the recordings were transcribed and the transcriptions were organized using NVivo, a qualitative data analysis tool. Using the program, commonly used terms and words from the conversations were color-coded and grouped into categories. Terms with similar meanings inferable from context were also coded together, even when the words participants used were not identical. The analysis then compared the coded terms and themes between the two research groups across the three categories derived from the research questions.
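As a minimal sketch of the counting-and-ranking logic behind this comparison (hypothetical; the actual coding was performed manually in NVivo, and the transcript excerpts and term list below are invented), the following Python code tallies how often a set of category terms appears in each group's transcript and ranks them by frequency:

    import re
    from collections import Counter

    # Hypothetical transcript excerpts keyed by condition; the real data were
    # full NVivo transcriptions of the four focus group recordings.
    transcripts = {
        "human_artist": "art is an expression of creativity ... art carries a message",
        "ai_artist": "art is an expression ... anything can be art ... art sends a message",
    }

    # Category terms of interest, drawn from the definition-of-art discussion.
    category_terms = {"expression", "creativity", "message", "anything"}

    def rank_terms(text, terms):
        """Count occurrences of each category term and rank them by frequency."""
        tokens = re.findall(r"[a-z]+", text.lower())
        counts = Counter(token for token in tokens if token in terms)
        return counts.most_common()

    for condition, text in transcripts.items():
        print(condition, rank_terms(text, category_terms))

The ranked outputs correspond, in spirit, to the term rankings reported in Table 2, although the actual rankings also reflect manually coded synonyms rather than exact string matches.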

5 Results

The issues discussed in the four focus group discussions were broadly classified into the following categories: (1) deeming the shown work as art, (2) the definition of art, and (3) the capability of AI to make art.

5.1 Deeming the Shown Piece as Art

People generally assume that the art pieces they encounter were made by a human artist. Thus, for participants in the “Human Artist Group,” the question of whether the given artwork was “art” was effectively identical to a question about the artistic value of a piece of modern art. The evaluation they made was therefore focused solely on the artistic value of the image, without any bias about its artist. Participants in the “AI Artist Group,” on the other hand, had in mind that the image was produced by artificial intelligence when evaluating it. Thus, in addition to assessing the artistic value of the work, there was another layer of cognitive processing that involved the consideration that the artist was not human.

Twenty-six of the 28 participants (all 14 in the “Human Artist Group” and 12 of the 14 in the “AI Artist Group”) answered that they considered the given image “art.” This was an interesting outcome, as some members of the “AI Artist Group,” despite knowing that the shown image was made by an artificial intelligence program, also said that artificial intelligence could not make art. A distinctive feature of the arguments for deeming the work “art” was that they were based on impressions rather than logical approaches. For example, one participant in the “AI Artist Group” said, “I like the colors and it makes me happy,” and another participant in the “Human Artist Group” stated, “This piece of work evokes my emotion and therefore I consider it art.” However, deeming it art did not always mean high satisfaction with the work. One participant who considered it art said, “It is very trippy. It is also not very traditional, though.” Another participant rated the artwork two out of five, remarking “I have no inherent attraction to it,” while still saying she thought the piece was art. Both of the participants who did not deem the work art were from the “AI Artist Group.” One of them took a logical approach to determining the type of image rather than focusing on how he felt, saying, “it really is like computer graphics.”

5.2 Definition of Art

An individual's personal definition of art is one of the schemas people hold, since its meaning is constructed through accumulated knowledge and experience (Harris and Sanborn 2014). The “AI Artist Group” and the “Human Artist Group” differed in that the former was shown an image presented as an AI artist's artwork, while the latter was presented with the same image but told that the art was produced by a human artist. Thus, only the former group (“AI Artist Group”) could have had its schema of artwork modified by this manipulation (i.e., now potentially inclusive of AI artwork in the general schema of artwork).

When comparing the terms participants used to define art in the two groups, several terms were shared by both groups: art was “an expression,” “creativity,” “a comprehensive approach,” and “a message.” “Expression” was the 1st-ranked and “message” the 5th-ranked term within each group for conceptualizing “art.” Even though the same terms were used, the frequency of the 2nd- through 4th-ranked terms differed between the two groups (see Table 2). Participants in the “Human Artist Group” stressed creativity (their 2nd-ranked term) as a crucial factor in understanding the art concept, whereas the “AI Artist Group” participants' 2nd-ranked term was “comprehensive approach,” by which they meant a broad definition of art (e.g., “anything can be art,” “whatever a person calls out,” “Anything,” and “there are so many ways to be artistic”). For the “Human Artist Group,” a “comprehensive approach” was the 3rd-ranked concept of art, and by it they meant something quite different: the art itself had to be “created,” a view that strongly implicated the causal role of the human in creating art (e.g., “anything that is created,” “something created by human,” and “anything created in some way”). Thus, the art-making-artificial-intelligence schema was not activated for the “Human Artist Group,” meaning that those participants apparently did not consider that art could be produced by an entity other than a human. The frequent use of the word “created” within the “Human Artist Group” accordingly refers to that which is “created by a human artist,” clearly excluding “AI artists” from this conceptualization.

Table 2. The terms used in the definition of art discussion and its ranking

Another difference between the two groups lay in terms unique to each group. The term used only by the “Human Artist Group” was “interactivity.” One participant in the group said, “(art is) a form of expression for interacting with the audience, transmitting intentions, and expressing idea and feelings.” Another participant in the group said, “art is a tool for communication,” and communication here is, again, between artists and audiences. The distinctive approach of the “AI Artist Group” participants was to see art objects as stimuli that arouse sense perceptions. Statements related to this view included “(art should) aesthetically engage viewers” and “(art should be) something that stirs emotion or some sort of reaction.” The difference between the two groups' unique approaches was that one (the “Human Artist Group”) required mutual, two-way message sending and receiving, while the other (the “AI Artist Group”) saw art as one-way communication.

5.3 Capability of AI Making Art

The direct question “Can AI make art?” provided further insight into how focus group participants conceptualized AI art. This was the first question in which the “Human Artist Group” was asked about its view of artificial intelligence. Hence, unlike the previous two questions, in which the concept of artificial intelligence was salient only to the “AI Artist Group,” this question placed both groups on the same footing. Moreover, because priming makes the most recently activated stimuli more accessible, the AI-creativity schema in the “AI Artist Group” was likely already active as a result of the discussion of the previous questions (Jeong and King 2010). The “Human Artist Group,” on the other hand, was at a stage where the AI-creativity schema was far from being activated. Therefore, it is still possible to compare cognitive outcomes attributable to the influence of the schema.

Among participants in both groups who agreed that AI can make art, two reasoning themes were identical across the groups. The first shared theme was “art with a different value.” These participants argued that artificial intelligence can make art, but the art it makes should be distinguished from art that human artists make. Arguments reflecting this opinion included “yes, but not original art,” “It is art but less valuable due to less uniqueness,” and “instinctively perceive it as an art but not like traditional artworks.” The first two statements were based on the idea that originality and uniqueness are artistic values that artificial intelligence cannot achieve, or at least not as well as a human can. Also, the word “instinctively” can be linked to the impression-based decision to deem the given art piece “art.” The other theme was “personal preference,” such as “Because I liked it, I would still feel it as an art, no matter who made it.” Similarly, another participant in the “AI Artist Group” said, “No matter how it is produced, it is art if I like it.” Although both groups shared these reasonings, each group also offered distinct reasonings for the same conclusion that AI can make art.

The “Human Artist Group” participants who agreed that artificial intelligence could make art employed two distinctive themes to support their argument. The most frequently argued theme overall was “the artificial brain.” This reflects their presumption that artificial intelligence functions in the same way a human brain does, even though it is artificially made. Relevant statements included, “There is a possibility that AI might have a feeling later on if it functions identically with a human brain, and this gives the possibility of AI making art” and “Because it is built to be similar to a human as much as possible, its creation should be deemed as a human creation.” There was also a view that having an “intention” to create art is an important factor in judging the capability of AI to make art. During a conversation on this topic, one question arose: which is more artistic, an artwork painted by an elephant or one produced by artificial intelligence? One participant said, “Unlike an elephant can't be an artist, AI is more of an artist as it possesses the intention of producing art coming from its programmer since its development.”

Two themes were distinctive to the “AI Artist Group.” One was “defined by purpose.” One participant said AI could make art “if it is programmed to do so.” His view originated from the idea that purpose is what defines an action, no matter how others perceive it. If a programmer built an AI to make art, he thought, everything created by the AI should be deemed “art.” This is dissimilar to the theme of “intention,” as “defined by purpose” treats AI as a purely passive tool. The other approach supporting the capability of artificial intelligence to create art was the “comprehensive approach,” the same theme that appeared in the earlier discussion of the definition of art. The approach here was to broaden the concept of art beyond the art we see in an art museum. One participant said, “AI itself could be deemed as art so its creation should also be deemed as art…I am art, too” (see Table 3).

Table 3. Themes in the capability of AI making art discussion (agreed) and its ranking

Unlike participants who agreed, participants who disagreed that artificial intelligence can make art showed no distinctive differences between the groups. Both groups cited “lack of human value,” “authority,” and “incapacity of AI” as the main reasons artificial intelligence cannot produce artwork. The most frequently supported theme was artificial intelligence's lack of human values. Many of the “lack of human value” arguments from the “AI Artist Group” focused on “feeling,” such as “No computer's going to tell him, ‘Oh this is how I felt, so I drew this.’ No, I don't think that's ever going to happen.”, “It's already creative, but you can't have feelings”, “it has a brain (thinking process), but it does not have the heart (feeling)”, and “You need expression, thought, and feelings to make art.” However, this theme was not limited to feeling; there was also “an instinctive denial.” One participant said, “Even though admitting the possibility of AI being creative, I don't want to admit anything produced by AI as art once knowing that it is made by AI. I think making art only belongs to a human.” She said she could not eloquently explain the reason she felt that way, but she could not accept the idea that what artificial intelligence makes can be viewed as art. The “Human Artist Group,” on the other hand, made the same argument that AI cannot make art due to the “lack of human value,” but with different approaches. Participants in that group said, “Even though it mimics or possesses emotion, what it produces cannot be art as it does not have a human spirit and the AI emotion is not human emotion” and “The effort that is put to make art by a human is different compared to the effort of AI.” Even though both of these arguments appealed to the lack of human values, they took dissimilar approaches, as one is based on a measurable value (effort) while the other is based on what cannot be measured (spirit). One interesting observation in this discourse was that the “AI Artist Group” was particularly more active when arguing that artificial intelligence cannot make art due to the “lack of human values.”

“Authority” is an argument based on the human role, especially that of programmers, in the creativity of artificial intelligence. This is well illustrated by statements such as “The AI art is an extension of whoever created the AI” and “It is still the programmer who makes art, not AI.” As seen here, the idea of authority over creativity is linked to doubt that artificial intelligence is fully independent of its creator. “Incapacity of AI” was another theme both groups used to support the argument that AI cannot make art. These participants focused on the inherent structure of artificial intelligence that limits its capacity to make art. For instance, a participant in the “AI Artist Group” said, “AI might have feeling, but its feeling would not be as delicate as human's emotion.” This differs from the “lack of human value” theme in that it still admits that artificial intelligence can have emotion. Similarly, a person in the “Human Artist Group” said, “biologically living things have emotion but AI cannot since it is not naturally born.” This argument did not focus on the fact that AI is not human, but rather on the fact that AI is not “biologically” created. The participant spoke of feelings, like pain, which she thinks can be felt only by biologically constituted entities. Even if artificial intelligence can have emotion, such feelings were deemed impossible for it. One theme appeared only in the “AI Artist Group”: “perfect and logical.” One participant said, “Creativity comes out from unexpected circumstances or even through mistakes. However, AI perfectly calculates to avoid such mistakes.” Another participant in the same group used the terms “preprogrammed” and “predictable,” which he believed contradict the concept of art (see Table 4).

Table 4. Themes in the capability of AI making art discussion (disagreed) and its ranking

6 Discussion

This study aimed to identify the influence of bias and schema regarding artificial intelligence creating art, in order to develop the idea that the ontology of the creativity of artificial intelligence rests on the audience's perception. In this section, the major findings of this study, its limitations, and implications for future research are discussed.

6.1 Major Findings

This study showed that whether people grant AI the capability to create art depends on the context in which the question is asked. When participants in the “AI Artist Group” were asked, “Can AI make art?”, quite a few (n = 10) answered “no” based on a logical approach involving a schema triggered by the verbal question. These participants added that artificial intelligence could not create art because it cannot have feelings, intention, or the possibility of making mistakes. However, when the same participants were shown an image with the information that it was produced by artificial intelligence and asked, “Do you think this image is ‘art’?”, most of them (n = 12) said “yes” based on an impression-based approach involving a schema triggered by the image. These participants answered based on how they felt after seeing the picture and argued that their feeling was crucial in deciding what counts as “art.” One participant believed that anything produced by artificial intelligence should not be viewed as “art,” yet, while informed that the shown image was produced by artificial intelligence, she deemed it “art” because she thought it was “cool.” Thus, real-world acceptance of the creativity of artificial intelligence may operate apart from how it is viewed in related discourses. In other words, even where there are pessimistic views toward artificial intelligence being creative, markets for products created by artificial intelligence may function independently of such views, depending on how they are marketed. Therefore, the context of the message should be analyzed beforehand in order to persuade consumers of the creativity of artificial intelligence.

Another outcome of the question about AI's capacity to create art was that the “Artist as AI” schema aroused stronger disapproval of artificial intelligence creating art in the “AI Artist Group” than in the “Human Artist Group.” The information that the painter of the shown image was artificial intelligence was given only to the “AI Artist Group.” In other words, the “AI Artist Group” had a recent experience with an AI-made painting, while the “Human Artist Group” did not. In this setting, the schema about artificial intelligence was triggered in the “AI Artist Group,” whereas the “Human Artist Group,” with no triggered schema, presumably held opinions similar to those of the general public. Thus, the “AI Artist Group” served as the experimental group and the “Human Artist Group” as the control group. In comparing the two types of focus groups, what was distinctive was that the “Artist as AI” schema led to a stronger tendency toward the belief that AI cannot produce art. Among those who argued that artificial intelligence cannot make art, the “AI Artist Group” was more active during the discussion, used a greater variety of terms, and attempted more diverse approaches to strengthen the argument than the “Human Artist Group.” Conversely, among those who argued that artificial intelligence can make art, the “Human Artist Group” was more active than the “AI Artist Group.”

One unexpected finding from the discussions concerned the origins of AI creativity. The authority over creativity came up often during the discussions as one approach to arguing that artificial intelligence is structurally limited in creating art. Some participants argued that artificial intelligence cannot be creative because it is a mere representation of its programmer's creativity. Jennings (2010) argued that the creativity of artificial intelligence can be fulfilled only when the system's autonomy from its programmer is guaranteed, such as artificial intelligence altering its own evaluation standard without any input from its programmer. The participants' reactions related to authority may be due to insufficient trust in the autonomy of artificial intelligence, even though a brief explanation of artificial intelligence, including its autonomy, was given before the discussion. Thus, the reaction is less likely due to a misunderstanding of the concept and more likely due to the assumption that creative artificial intelligence generating data by itself is hypothetical, as one participant remarked: “Would it still need input to make any output?” Hence, one way to diminish the negative stereotype about artificial intelligence being creative is to successfully persuade the public of its autonomy. This is similar to Colton's (2008) argument that providing information on how software functions is crucial to proving the creativity of a computational system.

6.2 Limitations

Even though the study was carefully planned and conducted, there were limitations that should be considered in future studies. First, there was a gender imbalance among participants. In this study, no distinctive gender-based difference was observed in either the content of responses or participation. Still, since this study used an experimental approach, a balanced gender distribution would have increased both internal and external validity, as neither artificial intelligence nor art is a single-gender topic. Second, there were trade-offs in combining quantitative and qualitative research methods, since this study employed a QUAN-qual design, starting from a quantitative theoretical idea and using a qualitative method as a supplemental component (Morse et al. 2006). This research design had a few disadvantages. For participant recruitment, focus group discussion requires homogeneity of participants, whereas experiments gain validity through random sampling. Also, it was not feasible to conduct focus group discussions with enough people to satisfy the analytical validity requirements of an experiment.

6.3 Implications for Future Research

Wilson (1983) conducted a study of art-producing artificial intelligence before AI was familiar to the public, emphasizing the role of audiences, as the present study does. However, his study did not extend the inquiry to audience bias and the understanding of the creativity of artificial intelligence, which this research attempted. This study started from the idea that stereotypes about artificial intelligence being creative would cause different perceptions and cognitive understandings of artwork produced by AI. A positive relation between accepting AI as capable of producing art and deeming art created by AI to be “art” was predicted, but the results showed a contrary outcome. Comparing the two groups yielded two major findings: (1) the schema tended to influence the logical decision, and (2) this tendency strengthened the idea that artificial intelligence cannot create art. However, the schema did not alter the impressionistic process (how participants felt about a painting produced by artificial intelligence), which led them to accept it as “art.” Because this was not an anticipated result, an inquiry designed to investigate the discrepancy is needed. Further studies focusing on how different contexts of similar messages influence schema processing would bring additional insight.

Also, employing a qualitative research method made it possible to investigate more deeply how bias toward artificial intelligence influences perceptions of what it is and what it is capable of, by examining the values audiences held and used to make decisions. These values can be used to develop scales for measuring the perception and evaluation of artworks created by artificial intelligence. Such findings can support future quantitative research on the perception and cognitive processes involved in receiving unfamiliar information about new technology. The speed with which technological developments occur often leaves the public with very little technical knowledge of them and requires expert explanation. This lack of understanding of new scientific information can lead the public to assume that the development is a threat to them (Haynes 2013). Further research on how the public understands artificial intelligence and new technologies can help explain technological developments more persuasively and make them more accessible.