Exploring Trust in Human–AI Collaboration in the Context of Multiplayer Online Games


Keke Hou, Tingting Hou and Lili Cai

1 School of Health Sciences, Guangzhou Xinhua University, Guangzhou 510520, China
2 School of Management, Zhengzhou University, Zhengzhou 450001, China
3 School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou 510520, China

Author to whom correspondence should be addressed.

Systems 2023, 11(5), 217; https://doi.org/10.3390/systems11050217

Received: 12 February 2023 / Revised: 4 April 2023 / Accepted: 20 April 2023

(This article belongs to the Special Issue Human–AI Teaming: Synergy, Decision-Making and Interdependency)

Abstract

Human–AI collaboration has attracted interest from both scholars and practitioners. However, the relationships in human–AI teamwork have not been fully investigated. This study examines the factors that influence trust in AI teammates and the intention to cooperate with AI teammates. We conducted an empirical study by developing a research model of human–AI collaboration. The model captures the influencing mechanisms of interactive characteristics (i.e., perceived anthropomorphism, perceived rapport, and perceived enjoyment), environmental characteristics (i.e., peer influence and facilitating conditions), and personal characteristics (i.e., self-efficacy) on trust in teammates and cooperative intention. A total of 423 valid survey responses were collected to test the research model and hypothesized relationships. The results show that perceived rapport, perceived enjoyment, peer influence, facilitating conditions, and self-efficacy positively affect trust in AI teammates. Moreover, self-efficacy and trust positively relate to the intention to cooperate with AI teammates. This study contributes to the teamwork and human–AI collaboration literature by investigating the antecedents of the trust relationship and cooperative intention.

Keywords:

human–AI collaboration; trust; teamwork; intention to cooperate with AI teammates

1. Introduction

The proliferation of artificial intelligence (AI) technologies has led numerous companies to invest significant resources in developing AI-related services across all walks of life. The 2022 AI Index report shows that AI has become more affordable while delivering better performance [1]. Falling training costs and shorter training times facilitate the adoption of AI technologies in the commercial arena. Against this background, exponential growth in the AI field is increasing the interaction between humans and AI agents [2]. Human–AI collaboration, in which AI agents work as interdependent teammates cooperating with humans toward common goals, has become an irreversible trend [3].

Human–AI collaboration is a major area of interest within the field of management because it enables decision-makers to take meaningful actions to advance AI technologies [12]. Studies have investigated human–AI collaboration in different contexts, such as data science [4] and piloting [5]. Although recent work has examined human–AI teaming in the context of multiplayer online games and identified factors that influence people’s willingness to collaborate with AI teammates as well as their preferred features of AI teammates [6], much of the research to date has been descriptive in nature. There has been no detailed investigation of the mechanisms that shape humans’ intentions to work with AI teammates in highly complex environments.

Trust is at the core of understanding new technology adoption [7] and virtual team collaboration [8]. Trust in the team and trust in AI teammates are shaped by the interactions between humans and AI teammates [9]. In human–AI collaboration, humans and AI cooperate toward the same goal, and the collaboration involves the interaction between humans and AI, personal characteristics, and environmental characteristics [10,11]. Trust may explain how these different characteristics (i.e., interactive characteristics between humans and AI, environmental characteristics, and personal characteristics) ultimately affect human–AI collaboration intention. This study aims to investigate the mechanisms through which these characteristics influence people’s intention to collaborate with AI teammates. Multiplayer online games were selected as the context because the adoption of AI technologies in such games involves a dynamic flow and complex collaborations [6]. Single-player games are played in a single-player mode that requires no cooperation and involves no teamwork, whereas multiplayer games are played with two or more teammates working together toward the goal of the match. In teamwork, mutual trust between teammates is an important factor affecting the willingness to collaborate. The decision to play alongside AI players in online games is thus a typical instance of human–AI collaboration: AI players act as virtual agents and have become the opponents or teammates of many real players, which raises the question of what affects real players’ willingness to choose AI teammates. This investigation therefore addresses the following research questions: (1) What factors influence trust in AI teammates and the intention to cooperate with AI teammates, and how? (2) How does people’s trust in AI teammates influence their intention to cooperate with AI teammates? To answer these questions, this work reviews previous research and develops and empirically tests a research model of human–AI collaboration. Based on the characteristics of teamwork in online games, the model incorporates the interactive characteristics of online game platforms, i.e., perceived anthropomorphism, perceived rapport, and perceived enjoyment, alongside personal and environmental characteristics. The results can serve as a reference for other human–AI collaborative environments.

This paper presents the following theoretical contributions. First, this investigation contributes to the online games literature by studying the adoption of AI technologies in online games. Second, this study lays the groundwork for future research into relationships in human–AI collaboration. Third, this study adds to the team collaboration literature by identifying the influencing factors of trust perceptions and intention to cooperate with AI teammates. This work will extend our understanding of users’ views on human–AI collaboration.

The rest of this paper is organized as follows. In the next section, the theoretical background, including human–AI collaboration and trust, is discussed. In Section 3, the theoretical model and hypothesis development are presented. Sections 4 and 5 describe the research method, covering data collection and survey development, and the data analysis. Section 6 concludes the paper and discusses the theoretical and practical contributions.

2. Literature Review

2.1. Human–AI Collaboration

With the advancement of technologies, human–AI collaboration has attracted the attention of scholars and practitioners seeking to improve teamwork [9,12]. Team collaboration can benefit from technological team support that adds value to teams [12]. While traditional teamwork means two or more people working together toward the same goal, human–AI collaboration describes the interaction process between humans and AI machines. Consistent with previous studies [13], human–AI collaboration in the current investigation means that AI machines work as interdependent teammates collaborating with humans toward a common goal, such as solving problems, gaining insights, and creating value. Human–AI collaboration is increasingly important in computer-supported teamwork research [6].

Much of the current research recognizes the value of AI adoption in teamwork. For example, Seeber et al. [12] discuss future design strategies for human–AI team collaboration in various areas. Schelble et al. [9] explore dimensions of ethics in human–AI collaboration and examine the effect of trust-repair strategies on trust and team performance. Hauptman et al. [2] propose design suggestions for the development of adaptive autonomous teammates. Despite this increasing focus on design strategies, little research has fully investigated the importance of human feelings in human–AI collaboration. The purpose of adopting AI technologies in team collaboration is to achieve better and more efficient work performance. Therefore, whether humans can trust AI machines as interdependent teammates deserves researchers’ attention. Given that trust is an important concern in teamwork, this study investigates the factors that influence human trust toward AI teammates and the resulting cooperative intention.

2.2. Trust

The significance of trust has been extensively addressed in prior studies [7,8,11,13]. Trust indicates the extent to which a party is willing to be vulnerable to another party’s actions based on the trustor’s positive expectations [14]. It is well established that trust-building is essential in virtual teams [8]. Consistent with a previous definition of trust [15], we define trust in AI teammates as a human’s willingness to be vulnerable to the actions of AI teammates based on the expectation that the AI teammates will perform actions important to the human. Recent work has established that the use of machine teammates can change the trust relationship among teammates [12]. The addition of AI teammates to a team may affect how we trust teammates, especially when we tend to accept recommendations provided by AI machines rather than by human teammates. Trust in AI teammates has both similarities to and differences from traditional trust relationships in team collaboration. As in traditional teamwork, trust in AI teammates arises from the interaction among teammates. Unlike traditional teamwork, however, trust in AI teammates is more complex because it reflects the interactions between humans and AI machines. A trust relationship in traditional teamwork is interpersonal trust, while trust in AI teammates goes further by also involving trust in technology.

Compared with the trust relationship among human teammates, the trust relationship between humans and AI teammates may be influenced by different factors, such as the interaction between humans and AI machines. Although trust is fundamental to determining human behavior, including decisions about technology adoption [7,12], researchers have not treated the trust relationship between human and AI teammates in much detail. This study explores the crucial role of trust in linking its influencing factors with human collaboration intention. We propose that the trust relationship between humans and AI teammates is influenced by their interaction. Beyond this, environmental factors, such as external conditions and the influence of peer groups, can shape human feelings in human–AI teamwork, and personal characteristics, such as self-efficacy in AI technology use, can affect people’s experiences in human–AI cooperation.

3. Theoretical Model and Hypotheses

Behavioral reasoning theory is a broad theory explaining the motives of human behaviors [16]. It explains the antecedents of a specific behavior by including the reasons for adopting or resisting it [16]. Previous studies have used behavioral reasoning theory to explain the adoption of new technology [17]. This investigation adopts it as an overarching theory to understand the adoption of human–AI collaboration in the context of online games. Trust is at the core of understanding behavioral intentions in explaining new technology adoption [7]. This study therefore links the antecedents of AI technology use, trust, and the intention to cooperate with AI teammates. The antecedents are divided into three kinds: interactive characteristics, environmental characteristics, and personal characteristics.

3.1. The Influencing Factors of Trust

3.1.1. Interactive Characteristics

As discussed earlier, trust is influenced by the interaction between human and AI teammates. This interaction mainly involves three crucial characteristics, i.e., perceived anthropomorphism, perceived rapport, and perceived enjoyment [18,19,20]. Perceived anthropomorphism refers to the extent to which humans tend to regard AI teammates as actual human beings and seek emotional assistance during human–AI collaboration [21,22]. During human–AI team collaboration, people develop relationships and emotional connections with AI teammates through regular communication [23]. Existing research recognizes perceived anthropomorphism as an important factor in determining people’s attitudes [21]. Toward AI machines with higher anthropomorphism, people tend to show the same feelings as they would toward human teammates. Therefore, people can be more willing to trust AI teammates with higher perceived anthropomorphism because the interaction more closely resembles cooperation between humans.

Consistent with previous work [24], perceived rapport here refers to the personal connection between human and AI teammates. Human–AI rapport matters for human–AI collaboration because the cooperation is directed toward a shared goal. Evidence suggests that user–robot rapport is among the most important factors in customers’ hospitality experience [19]. In human–AI team collaboration, the collaboration experience benefits from higher perceived rapport. When people build a good rapport with AI teammates, they will tend to believe that the AI teammates can be trusted in the collaborative interaction.

Perceived enjoyment indicates the extent to which the interaction with AI teammates is perceived to be pleasurable [25]. The experience of interacting with AI teammates is quite different from communicating with humans, and AI-related technology can provide enjoyment in human–AI interactions [20]. In human–AI team collaboration, people who feel enjoyment in interacting with AI machines will be more willing to build a trust relationship with AI teammates. Based on the above arguments, the following hypotheses are proposed:

H1.

Perceived anthropomorphism has a positive effect on trust in AI teammates.

H2.

Perceived rapport has a positive effect on trust in AI teammates.

H3.

Perceived enjoyment has a positive effect on trust in AI teammates.

3.1.2. Environmental Characteristics

People’s feelings and perceptions about human–AI teamwork can be influenced by environmental factors, such as peer influence and facilitating conditions. Adapted from previous studies [14], peer influence indicates the extent to which people’s feelings in human–AI collaboration are influenced by peers, such as family members, friends, and colleagues. In the context of online games, peer influence refers to the influence of family members’, friends’, or colleagues’ adoption of AI teammates. New behaviors can originate from observing and imitating others [26]. When peers show positive feelings toward a new technology, people’s feelings about that technology tend to align with their peers’. In human–AI collaboration, peers’ feelings about AI teammates will positively affect people’s trust perceptions: people whose peers feel more positively about AI technology use will be more likely to trust AI teammates.

Facilitating conditions refer to the resource factors that support the use of AI machines in human–AI collaboration [27,28]. In the context of online games, facilitating conditions involve the assistance offered by the online game platform. Previous studies have addressed the role of facilitating conditions in supporting the adoption of new technologies [29,30]. During human–AI teamwork, we expect that facilitating conditions, as an important environmental factor, will support people’s use of AI machines and raise their trust in AI teammates. Based on the above arguments, we propose the following hypotheses:

H4.

Peer influence has a positive effect on trust in AI teammates.

H5.

Facilitating conditions have a positive effect on trust in AI teammates.

3.1.3. Personal Characteristics

Self-efficacy refers to confidence in one’s own ability to use AI technology in human–AI collaboration [31]. The crucial role of self-efficacy has been widely discussed in research on new technology adoption. For example, Rahman et al. [32] found that healthcare technology self-efficacy positively influenced people’s attitudes toward the use of health technologies. Jussupow et al. [33] indicate that diagnostic self-efficacy affects sensemaking processes in using AI systems. The relationship between self-efficacy and trust has also been verified in existing research [34]. Furthermore, people with higher self-efficacy are more willing to show positive attitudes toward new technologies. In human–AI collaboration, people with higher self-efficacy in using AI technology will tend to believe that AI teammates can be trusted to cooperate toward the same goal and will be inclined to cooperate with them. Therefore, the following hypotheses are proposed:

H6.

Self-efficacy has a positive effect on trust in AI teammates.

H7.

Self-efficacy has a positive effect on intention to cooperate with AI teammates.

3.2. Trust and Intention to Cooperate with AI Teammates

Trust is one of the most important factors in explaining technology-adoption behavior, including AI adoption [35]. Trust has also been widely explored in virtual team collaboration [8]. Human–AI team collaboration is designed to take advantage of AI machines and new technology to facilitate teamwork. During team collaboration, people’s trust in teammates signifies that they intend to rely on their teammates even under uncertainty or potential loss. In human–AI collaboration, people’s trust in AI teammates means that they are willing to rely on their AI teammates to accomplish teamwork. We hypothesize that people with higher trust in AI teammates are more willing to cooperate with them. Therefore, the following hypothesis is presented:

H8.

Trust in AI teammates has a positive effect on intention to cooperate with AI teammates.

Figure 1 presents the research model of human–AI collaboration.


Figure 1. The research model of human–AI collaboration.
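
For readers who want to reproduce the model structure, the sketch below expresses Figure 1 in lavaan-style syntax using the Python semopy package. This is a covariance-based approximation rather than the variance-based SEM the authors ran in ADANCO, and the construct abbreviations, item names, and file name are illustrative assumptions, not the authors' materials.

```python
import pandas as pd
import semopy

# Lavaan-style specification of the Figure 1 model; item names (pa1...int3)
# are hypothetical placeholders, not the authors' variable names.
MODEL_DESC = """
# measurement model (reflective indicators)
PA  =~ pa1 + pa2
PR  =~ pr1 + pr2 + pr3
PE  =~ pe1 + pe2
PI  =~ pi1 + pi2 + pi3
FC  =~ fc1 + fc2 + fc3
SE  =~ se1 + se2 + se3
TRU =~ tru1 + tru2 + tru3
INT =~ int1 + int2 + int3
# structural model (H1-H8)
TRU ~ PA + PR + PE + PI + FC + SE
INT ~ SE + TRU
"""

data = pd.read_csv("survey_items.csv")  # hypothetical item-level responses
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # estimates, standard errors, and p-values per path
```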

4. Research Method

4.1. Data Collection

An online survey was used to obtain the sample data. A research invitation containing a link to the e-questionnaire was distributed to the target sample through social network sites such as WeChat. As a non-interventional study, all participants were fully informed that the anonymity of all personal information would be assured, that the research was conducted for academic rather than commercial purposes, and that their data would be used without any predictable risk. Participants were told that completing the online questionnaire indicated their consent to the analysis of their data in this study. All respondents were required to have basic knowledge of multiplayer online games, such as League of Legends, Honor of Kings, Crossfire, and World of Warcraft. We first conducted a pilot study to test reliability and validity; the reliability and validity of all scales were found to be acceptable. A formal data collection was then performed. We received 500 responses and removed the samples that failed the attention-check questions, leaving 423 valid responses for further analysis. As presented in Table 1, most of the respondents (73.3%) are aged between 20 and 30 years. The respondents comprise 55.8% males and 44.2% females, and most (83.0%) have received a 3- or 4-year college education.

Table 1. Demographic characteristics of respondents (n = 423).
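
The attention-check screening step can be illustrated with a minimal pandas sketch; the file name, column name, and expected answer are assumptions for illustration, not the authors' actual setup.

```python
import pandas as pd

# Hypothetical raw export of the 500 collected responses.
responses = pd.read_csv("survey_responses.csv")

# An attention-check item instructs respondents to select a specific option
# (assumed here to be 4 on the 7-point scale); rows that fail are dropped.
valid = responses[responses["attention_check"] == 4].copy()

print(f"{len(responses)} collected, {len(valid)} retained")  # e.g., 500 -> 423
```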

4.2. Survey Development

For better validity, we adapted established scales from previous studies to measure all constructs. Our primary construct, trust in AI teammates, was measured using three items adapted from Holten [13] and Pavlou and Gefen [36]. The other primary construct, the intention to cooperate with AI teammates, was measured using three items from Lim et al. [37]. Drawing on Guido and Peluso [22] and Fernandes and Oliveira [24], perceived anthropomorphism was measured by two items. Perceived rapport was measured using three items adapted from Fernandes and Oliveira [24]. Two items adapted from Agarwal and Karahanna [38] were used to measure perceived enjoyment. Peer influence was measured using three items from Herath and Rao [39] and Carlson and Zmud [40]. Three items adapted from Thompson et al. [41] and Van Doorn et al. [28] were used to assess facilitating conditions. Self-efficacy was measured by three items from Hua et al. [31]. All scales used a 7-point Likert scale on which 1 indicates strongly disagree and 7 indicates strongly agree. Table 2 presents the detailed questions and item scales.

Table 2. Measurement items.

5. Data Analysis

5.1. Measurement Model

ADANCO 2.3.1, a software package for variance-based structural equation modeling (SEM), was employed to test the research model and hypothesized relationships. All variables in our research model are reflective. We evaluated the reflective measurement models in terms of internal consistency, convergent validity, and discriminant validity [42]. Internal consistency was assessed using composite reliability (CR) and Cronbach’s alpha. As presented in Table 3, all values of CR and Cronbach’s alpha were higher than the suggested threshold of 0.707, indicating good internal consistency reliability. Convergent validity was assessed using outer loadings and average variance extracted (AVE). Table 3 shows that all outer loadings were higher than 0.7, and all AVE values were above 0.5, which indicates that each variable explains more than 50% of the variance of its indicators. The AVE values and factor loadings show that the measurement model has good convergent validity.

Table 3. Descriptive statistics.
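
To make these thresholds concrete, the following minimal sketch computes Cronbach’s alpha, CR, and AVE using their standard formulas; the loadings are illustrative values, not the paper’s estimates.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k) response matrix for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE: mean squared standardized loading."""
    return (loadings**2).mean()

# Illustrative loadings (not the paper's estimates): all above 0.7,
# so CR exceeds 0.707 and AVE exceeds 0.5, as the criteria require.
lam = np.array([0.82, 0.85, 0.79])
print(round(composite_reliability(lam), 3), round(average_variance_extracted(lam), 3))
```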


Discriminant validity was assessed using the Fornell–Larcker criterion and cross-loadings [42]. According to the Fornell–Larcker criterion, each construct’s AVE should be higher than its squared correlations with the other constructs. As presented in Table 4, all AVE values on the diagonal exceed the squared correlation with any other variable. For the cross-loadings criterion, each indicator’s outer loading on its associated variable should exceed its loadings on all other variables. Table 5 presents the cross-loadings, indicating that discriminant validity is not a concern in this study. Table 6 shows the inter-construct correlations; all correlations were less than 0.9, indicating that common method bias is not a major concern in this investigation.

Table 4. Discriminant validity evaluation based on the Fornell–Larcker criterion.

Table 5. Factor loadings and cross-loadings.

Table 6. Inter-construct correlations.
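
The Fornell–Larcker check itself takes only a few lines; the AVE values and correlations below are illustrative stand-ins, not figures taken from Tables 4 and 6.

```python
import numpy as np

# Fornell-Larcker criterion: a construct's AVE must exceed its squared
# correlation with every other construct. All numbers are illustrative.
constructs = ["PR", "FC", "TRU"]
ave = np.array([0.72, 0.68, 0.75])                  # assumed AVEs
corr = np.array([[1.00, 0.41, 0.52],
                 [0.41, 1.00, 0.57],
                 [0.52, 0.57, 1.00]])               # assumed correlations

sq = corr**2
np.fill_diagonal(sq, 0.0)                           # ignore self-correlations
for i, name in enumerate(constructs):
    ok = ave[i] > sq[i].max()
    print(f"{name}: AVE = {ave[i]:.2f}, max r^2 = {sq[i].max():.2f} -> "
          f"{'OK' if ok else 'violated'}")
```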

5.2. Structural Model and Hypothesis Testing

We performed bootstrapping for inference statistics. To test the research model and hypothesized relationships, we assessed the path coefficients, their significance, and the coefficient of determination (R² value). R² indicates goodness of fit and shows the share of variance in a dependent construct explained by its independent variables. The R² values for trust in AI teammates and intention to cooperate with AI teammates are 0.755 and 0.753, respectively, which can be considered excellent for research on human–AI collaboration.
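
As a concrete illustration of the bootstrap logic, the sketch below computes percentile confidence intervals for path coefficients by resampling respondents. It is a simplified stand-in, using ordinary least squares on synthetic standardized construct scores rather than the variance-based SEM resampling ADANCO performs; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_paths(X: np.ndarray, y: np.ndarray, n_boot: int = 5000):
    """Percentile-bootstrap 95% CIs for regression path coefficients.
    Constructs are assumed standardized, so no intercept term is fitted."""
    n = len(y)
    coefs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample respondents
        coefs[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    return np.percentile(coefs, [2.5, 97.5], axis=0)

# Synthetic scores standing in for self-efficacy and trust; the true
# coefficients mirror the reported betas for illustration only.
X = rng.standard_normal((423, 2))
y = 0.227 * X[:, 0] + 0.683 * X[:, 1] + rng.standard_normal(423) * 0.5
print(bootstrap_paths(X, y, n_boot=1000))            # CIs excluding 0 => significant
```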

Table 7 shows the results for the hypothesized relationships. Perceived anthropomorphism has no significant effect on trust in AI teammates (β = 0.034, p > 0.05), indicating that H1 is not supported. Perceived rapport positively influences trust in AI teammates (β = 0.193, p < 0.01), and thus H2 is supported. Perceived enjoyment is positively related to trust in AI teammates (β = 0.122, p < 0.05), so H3 is also supported. The results indicate that perceived rapport and perceived enjoyment, rather than perceived anthropomorphism, are the interactive characteristics that drive trust in human–AI collaboration. This result may be explained by the fact that people can clearly recognize the difference between machines and people; the point of human–AI collaboration is to offer an experience that is distinctly different from traditional human–human collaboration.

Table 7. Hypothesis testing results.

In addition, peer influence positively affects trust in AI teammates (β = 0.165, p < 0.05), which indicates that H4 is supported. This result means that the use of AI technology among peers directly influences people’s trust perceptions of AI teammates: when surrounded by peers who use AI technology, people tend to trust their AI teammates. Facilitating conditions positively influence trust in AI teammates (β = 0.295, p < 0.001), and thus H5 is also supported. This relationship indicates that external conditions facilitating AI technology use induce people’s trust in AI teammates.

Self-efficacy is positively related to trust in AI teammates (β = 0.165, p < 0.01) and to intention to cooperate with AI teammates (β = 0.227, p < 0.001), indicating that H6 and H7 are supported. The results reveal the crucial role of self-efficacy in AI technology use in determining human–AI collaboration: because AI is a new technology in team collaboration, people’s confidence in using it helps them trust AI teammates and collaborate with them.

Furthermore, trust in AI teammates positively influences intention to cooperate with AI teammates (β = 0.683, p < 0.001), which indicates that H8 is supported. The result signifies that trust is a determinant factor in people’s intention to collaborate with AI teammates.

6. Discussion

6.1. Summary of Findings

The main goal of this study is to determine what contributes to human–AI collaboration. For this purpose, this investigation develops and empirically tests a research model of the mechanisms that influence trust in AI teammates and the intention to cooperate with them. First, to answer the first research question (i.e., what factors influence trust in AI teammates and the intention to cooperate with AI teammates, and how?), we propose three important kinds of characteristics influencing trust in AI teammates and the intention to cooperate with them: interactive characteristics, external characteristics, and personal characteristics. By developing and assessing the research model of human–AI collaboration, this study shows that, as interactive characteristics, perceived rapport and perceived enjoyment positively affect people’s trust in AI teammates.

As interactive characteristics, perceived enjoyment and perceived rapport positively affect trust in AI teammates. These findings are consistent with previous research identifying the crucial role of interactions in building trust in human–AI teams [10] and showing that perceived enjoyment facilitates positive intentions [43]. One unanticipated finding is that the relationship between perceived anthropomorphism and trust in AI teammates is not supported. This may be explained by the fact that the interaction between human and AI teammates in multiplayer online games mainly consists of action coordination rather than verbal communication and other figurative expressions. Game platforms currently offer only limited communication between real players and their AI teammates [6]. What matters to real players is whether AI teammates can understand their goals, quests, and actions so that they can act accordingly and win the match. In this process, whether the AI teammate is like a “real person” is less important.

As external characteristics, peer influence and facilitating conditions positively relate to trust in AI teammates. Previous research suggests that facilitating conditions and social influence affect behavioral intention in IT acceptance [44]. This study further verifies the role of peer influence and facilitating conditions in determining behavioral intentions through their effect on trust in AI teammates in the context of online games. As a personal characteristic, self-efficacy positively affects both trust in and the intention to cooperate with AI teammates.

Second, to answer the second research question (i.e., how does people’s trust in AI teammates influence their intention to cooperate with AI teammates?), the results verify the positive relationship between trust in AI teammates and the intention to cooperate with them. This finding is consistent with previous research [8,35].

6.2. Theoretical and Practical Implications

Our study makes the following contributions to the human–AI collaboration literature. First, this study contributes to the online game literature. Recent studies have focused on online game strategy and addiction [45,46] rather than the adoption of new technologies in the context of online games. The adoption of AI technologies in online games has not been fully investigated, even though human–AI collaboration has drawn the attention of practitioners. This work contributes to the online game literature by emphasizing customers’ views on cooperating with AI teammates in the context of multiplayer online games.

Second, this study lays the groundwork for future research into relationships in human–AI collaboration. Previous studies have emphasized the design strategies of human–AI teamwork [2,12], while this study endeavors to understand the relationships generated during human–AI interaction. Few studies have attempted to comprehensively assess the factors that may influence the adoption and performance of human–AI collaboration. This study proposes a relatively comprehensive research model to explain people’s trust perceptions and intention to cooperate with AI teammates.

Third, this study adds to the team collaboration literature by identifying the factors influencing trust perceptions and the intention to cooperate with AI teammates. It establishes a quantitative framework for examining the role of interactive, external, and personal characteristics in determining human–AI collaboration. The current understanding of human–AI collaboration is still limited; the framework and research model hypothesized in this investigation support a theoretical account of the human–AI collaboration decision. The quantitative results indicate that this decision is determined by perceived rapport with and enjoyment of AI machines, the influence of peers and facilitating conditions, and users’ self-efficacy in using AI technologies. These findings will assist future research into human–AI collaboration.

This work also has practical implications. Although human–AI collaboration has become an irreversible trend, few practical insights have been shared on managing AI technologies to improve customers’ adoption and trust perceptions of AI teammates. This investigation verifies the significance of several antecedents of trust in AI teammates that are associated with the intention to cooperate with them. Managers who recognize that perceived rapport and enjoyment of using AI machines positively influence human–AI collaboration should steer the design of AI teammates toward better relationships and a more enjoyable experience. In addition, managers should recognize the significance of external factors and provide an instruction manual for using AI machines in teamwork, and a special department should be set up to help users who encounter difficulties in human–computer interaction. Moreover, training in using AI technologies should be provided at the beginning of human–AI collaboration to enhance users’ self-efficacy. Finally, the results show that perceived anthropomorphism does not significantly influence trust in AI teammates; thus, more effort should be invested in improving rapport and enjoyment during human–AI interaction rather than in the anthropomorphic design of AI machines.

6.3. Limitations and Future Directions

As with any study, this work is subject to some limitations. First, it was conducted in the context of multiplayer online games, so the results should be further tested in other human–AI collaboration contexts to establish generalizability. Second, like much behavioral research, this work investigates behavioral intention rather than actual behavior. With the increasing adoption of AI technologies in online games, future research is encouraged to investigate the relationship between users’ behavioral intention and their actual adoption of human–AI collaboration.

Author Contributions

Conceptualization, K.H. and T.H.; methodology, K.H.; software, K.H.; validation, K.H., T.H. and L.C.; formal analysis, K.H.; investigation, K.H. and L.C.; resources, K.H.; data curation, K.H.; writing—original draft preparation, K.H. and T.H.; writing—review and editing, L.C.; visualization, T.H.; supervision, L.C.; project administration, K.H.; funding acquisition, K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Undergraduate Teaching Quality and Teaching Reform Project of Guangdong Province (Letter from the Higher Education Office of the Guangdong Provincial Department of Education [2023] No. 4), the College Youth Innovation Talent Project of Guangdong Province, China (Grant Number 2022WQNCX099), the Higher Education Research Project sponsored by the Guangdong Higher Education Academy (Grant Number 22GQN14), and the Teaching and Research Project of Guangzhou Xinhua University (Grant Number 2022J036).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, D.; Mishra, S.; Brynjolfsson, E.; Etchemendy, J.; Ganguli, D.; Grosz, B.; Lyons, T.; Manyika, J.; Niebles, J.C.; Sellitto, M.; et al. The AI Index 2022 Annual Report. AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University. March 2022. Available online: https://aiindex.stanford.edu/report/ (accessed on 10 March 2022).
2. Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Comput. Hum. Behav. 2023, 138, 107451.
3. McNeese, N.J.; Demir, M.; Cooke, N.J.; Myers, C. Teaming with a synthetic teammate: Insights into human-autonomy teaming. Hum. Factors 2018, 60, 262–273.
4. Wang, D.; Weisz, J.D.; Muller, M.; Ram, P.; Geyer, W.; Dugan, C.; Tausczik, Y.; Samulowitz, H.; Gray, A. Human-AI collaboration in data science: Exploring data scientists’ perceptions of automated AI. In Proceedings of the ACM on Human-Computer Interaction, Glasgow, UK, 4–9 May 2019.
5. Liu, H.; Lai, V.; Tan, C. Understanding the effect of out-of-distribution examples and interactive explanations on human-AI decision making. In Proceedings of the ACM on Human-Computer Interaction, Online, 8–13 May 2021.
6. Zhang, R.; McNeese, N.J.; Freeman, G.; Musick, G. “An ideal human”: Expectations of AI teammates in human-AI teaming. In Proceedings of the ACM on Human-Computer Interaction, Online, 8–13 May 2021.
7. Ho, S.M.; Ocasio-Velázquez, M.; Booth, C. Trust or consequences? Causal effects of perceived risk and subjective norms on cloud technology adoption. Comput. Secur. 2017, 70, 581–595.
8. Zakaria, N.; Yusof, S.A.M. Crossing cultural boundaries using the internet: Toward building a model of swift trust formation in global virtual teams. J. Int. Manag. 2020, 26, 100654.
9. Schelble, B.G.; Lopez, J.; Textor, C.; Zhang, R.; McNeese, N.J.; Pak, R.; Freeman, G. Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming. Hum. Factors 2022, 00187208221116952.
10. Moussawi, S.; Koufaris, M.; Benbunan-Fich, R. How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electron. Mark. 2021, 31, 343–364.
11. Chi, O.H.; Jia, S.; Li, Y.; Gursoy, D. Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Comput. Hum. Behav. 2021, 118, 106700.
12. Seeber, I.; Bittner, E.; Briggs, R.O.; De Vreede, T.; De Vreede, G.J.; Elkins, A.; Maier, R.; Merz, A.B.; Oeste-Reiß, S.; Randrup, N.; et al. Machines as teammates: A research agenda on AI in team collaboration. Inf. Manag. 2020, 57, 103174.
13. Holten, R. Trust in sharing encounters among millennials. Inf. Syst. J. 2019, 29, 1083–1119.
14. Ozdemir, S.; Zhang, S.; Gupta, S.; Bebek, G. The effects of trust and peer influence on corporate brand—Consumer relationships and consumer loyalty. J. Bus. Res. 2020, 117, 791–805.
15. Körber, M. Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018) Volume VI: Transport Ergonomics and Human Factors (TEHF), Aerospace Human Factors and Ergonomics 20; Springer International Publishing: Cham, Switzerland, 2019; pp. 13–30.
16. Mariani, M.M.; Perez-Vega, R.; Wirtz, J. AI in marketing, consumer research and psychology: A systematic literature review and research agenda. Psychol. Mark. 2022, 39, 755–776.
17. Huang, Y.; Qian, L. Understanding the potential adoption of autonomous vehicles in China: The perspective of behavioral reasoning theory. Psychol. Mark. 2021, 38, 669–690.
18. Li, M.; Suh, A. Anthropomorphism in AI-enabled technology: A literature review. Electron. Mark. 2022, 32, 2245–2275.
19. Qiu, H.; Li, M.; Shu, B.; Bai, B. Enhancing hospitality experience with service robots: The mediating role of rapport building. J. Hosp. Mark. Manag. 2020, 29, 247–268.
20. Kahn, B.E.; Inman, J.J.; Verhoef, P.C. Introduction to special issue: Consumer response to the evolving retailing landscape. J. Assoc. Consum. Res. 2018, 3, 255–259.
21. Mishra, A.; Shukla, A.; Sharma, S.K. Psychological determinants of users’ adoption and word-of-mouth recommendations of smart voice assistants. Int. J. Inf. Manag. 2022, 67, 102413.
22. Guido, G.; Peluso, A.M. Brand anthropomorphism: Conceptualization, measurement, and impact on brand personality and loyalty. J. Brand Manag. 2015, 22, 1–19.
23. Hur, J.D.; Koo, M.; Hofmann, W. When temptations come alive: How anthropomorphism undermines self-control. J. Consum. Res. 2015, 42, 340–358.
24. Fernandes, T.; Oliveira, E. Understanding consumers’ acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption. J. Bus. Res. 2021, 122, 180–191.
25. Pillai, R.; Sivathanu, B.; Dwivedi, Y.K. Shopping intention at AI-powered automated retail stores (AIPARS). J. Retail. Consum. Serv. 2020, 57, 102207.
26. Hou, T.; Hou, K.; Wang, X.; Luo, X.R. Why I give money to unknown people? An investigation of online donation and forwarding intention. Electron. Commer. Res. Appl. 2021, 47, 101055.
27. Teo, T. The impact of subjective norm and facilitating conditions on pre-service teachers’ attitude toward computer use: A structural equation modeling of an extended technology acceptance model. J. Educ. Comput. Res. 2009, 40, 89–109.
28. Van Doorn, J.; Mende, M.; Noble, S.M.; Hulland, J.; Ostrom, A.L.; Grewal, D.; Petersen, J.A. Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers’ service experiences. J. Serv. Res. 2017, 20, 43–58.
29. Park, S.H.S.; Lee, L.; Yi, M.Y. Group-level effects of facilitating conditions on individual acceptance of information systems. Inf. Technol. Manag. 2011, 12, 315–334.
30. Peñarroja, V.; Sánchez, J.; Gamero, N.; Orengo, V.; Zornoza, A.M. The influence of organisational facilitating conditions and technology acceptance factors on the effectiveness of virtual communities of practice. Behav. Inf. Technol. 2019, 38, 845–857.
31. Hua, Y.; Cheng, X.; Hou, T.; Luo, R. Monetary rewards, intrinsic motivators, and work engagement in the IT-enabled sharing economy: A mixed-methods investigation of Internet taxi drivers. Decis. Sci. 2020, 51, 755–785.
32. Rahman, M.S.; Ko, M.; Warren, J.; Carpenter, D. Healthcare Technology Self-Efficacy (HTSE) and its influence on individual attitude: An empirical study. Comput. Hum. Behav. 2016, 58, 12–24.
33. Jussupow, E.; Spohrer, K.; Heinzl, A. Radiologists’ usage of diagnostic AI systems: The role of diagnostic self-efficacy for sensemaking from confirmation and disconfirmation. Bus. Inf. Syst. Eng. 2022, 64, 293–309.
34. Kim, Y.H.; Kim, D.J.; Hwang, Y. Exploring online transaction self-efficacy in trust building in B2C e-commerce. J. Organ. End User Comput. 2009, 21, 37–59.
35. Bedué, P.; Fritzsche, A. Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. J. Enterp. Inf. Manag. 2022, 35, 530–549.
36. Pavlou, P.A.; Gefen, D. Building effective online marketplaces with institution-based trust. Inf. Syst. Res. 2004, 15, 37–59.
37. Lim, K.H.; Sia, C.L.; Lee, M.K.; Benbasat, I. Do I trust you online, and if so, will I buy? An empirical study of two trust-building strategies. J. Manag. Inf. Syst. 2006, 23, 233–266.
38. Agarwal, R.; Karahanna, E. Time flies when you’re having fun: Cognitive absorption and beliefs about information technology usage. MIS Q. 2000, 24, 665–694.
39. Herath, T.; Rao, H.R. Encouraging information security behaviors in organizations: Role of penalties, pressures and perceived effectiveness. Decis. Support Syst. 2009, 47, 154–165.
40. Carlson, J.R.; Zmud, R.W. Channel expansion theory and the experiential nature of media richness perceptions. Acad. Manag. J. 1999, 42, 153–170.
41. Thompson, R.L.; Higgins, C.A.; Howell, J.M. Personal computing: Toward a conceptual model of utilization. MIS Q. 1991, 15, 125–143.
42. Hair, J.F.; Hult, G.T.M.; Ringle, C.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2016.
43. Ezer, N.; Bruni, S.; Cai, Y.; Hepenstal, S.J.; Miller, C.A.; Schmorrow, D.D. Trust engineering for human-AI teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Seattle, WA, USA, 28 October–1 November 2019; SAGE Publications: Los Angeles, CA, USA, 2019; Volume 63, No. 1, pp. 322–326.
44. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478.
45. Mishra, S.; Malhotra, G. The gamification of in-game advertising: Examining the role of psychological ownership and advertisement intrusiveness. Int. J. Inf. Manag. 2021, 61, 102245.
46. Salehi, E.; Fallahchai, R.; Griffiths, M. Online addictions among adolescents and young adults in Iran: The role of attachment styles and gender. Soc. Sci. Comput. Rev. 2022, 41, 554–572.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
