Clemson University
TigerPrints
All Dissertations
5-2023
How to Make Agents and Influence Teammates: Understanding
the Social Influence AI Teammates Have in Human-AI Teams
Christopher Flathmann
Clemson University
Follow this and additional works at: https://tigerprints.clemson.edu/all_dissertations
Recommended Citation
Flathmann, Christopher, "How to Make Agents and Influence Teammates: Understanding the Social
Influence AI Teammates Have in Human-AI Teams" (2023). All Dissertations. 3339.
https://tigerprints.clemson.edu/all_dissertations/3339
This Dissertation is brought to you for free and open access by the Dissertations at TigerPrints. It has been
accepted for inclusion in All Dissertations by an authorized administrator of TigerPrints. For more information,
please contact [email protected].
How to Make Agents and Influence Teammates:
Understanding the Social Influence AI
Teammates Have in Human-AI Teams
A Dissertation
Presented to
the Graduate School of
Clemson University
In Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy
Human Centered Computing
by
Christopher Flathmann
May 2023
Accepted by:
Dr. Nathan McNeese, Committee Chair
Dr. Brian Dean
Dr. Eileen Kraemer
Dr. Brygg Ullmer
Dr. Laine Mears
Abstract
The introduction of computational systems in the last few decades has enabled
humans to cross geographical, cultural, and even societal boundaries. Whether it was
the invention of telephones or file sharing, new technologies have enabled humans to
continuously work better together. Artificial Intelligence (AI) holds some of the greatest
potential among these technologies. Although AI has a multitude of
functions within teaming, such as improving information sciences and analysis, one
specific application of AI that has become a critical topic in recent years is the creation
of AI systems that act as teammates alongside humans, in what is known as a human-
AI team.
However, as AI systems transition into teammate roles, they will garner new respon-
sibilities and abilities, which ultimately give them a greater influence over teams'
shared goals and resources, otherwise known as teaming influence. Moreover, that
increase in teaming influence will provide AI teammates with a level of social influ-
ence. Unfortunately, while research has observed the impact of teaming influence by
examining humans’ perception and performance, an explicit and literal understand-
ing of the social influence that facilitates long-term teaming change has yet to be
created. This dissertation uses three studies to create a holistic understanding of the
underlying social influence that AI teammates possess.
Study 1 identifies the fundamental existence of AI teammate social influence
and how it pertains to teaming influence. Qualitative data demonstrates that social
influence is naturally created as humans actively adapt around AI teammate teaming
influence. Furthermore, mixed-methods results demonstrate that the alignment of AI
teammate teaming influence with a human’s individual motives is the most critical
factor in the acceptance of AI teammate teaming influence in existing teams.
Study 2 further examines the acceptance of AI teammate teaming and social
influence and how the design of AI teammates and humans’ individual differences can
impact this acceptance. The findings of Study 2 show that humans have the greatest
levels of acceptance of AI teammate teaming influence that is comparable to their
own teaming influence on a single task, but the acceptance of AI teammate teaming
influence across multiple tasks generally decreases as teaming influence increases.
Additionally, coworker endorsements are shown to increase the acceptance of high
levels of AI teammate teaming influence, and humans who perceive technology in
general to be more capable are potentially more likely to accept AI teammate
teaming influence.
Finally, Study 3 explores how the teaming and social influence possessed by
AI teammates change when presented in a team that also contains teaming influence
from multiple human teammates, which means social influence between humans also
exists. Results demonstrate that AI teammate social influence can drive humans to
prefer and observe their human teammates over their AI teammates, but humans’
behavioral adaptations are more centered around their AI teammates than their hu-
man teammates. These effects demonstrate that AI teammate social influence, when
in the presence of human-human teaming and social influence, retains potency, but
its effects are different when impacting either perception or behavior.
The above three studies fill a currently underexplored research gap in human-
AI teaming, which is both the understanding of AI teammate social influence and
humans’ acceptance of it. In addition, each study conducted within this dissertation
synthesizes its findings and contributions into actionable design recommendations
that will serve as foundational design principles to allow the initial acceptance of AI
teammates within society. Therefore, not only will the research community benefit
from the results discussed throughout this dissertation, but so too will the developers,
designers, and human teammates of human-AI teams.
Dedication
Dedicated to Harrison,
one of the best friends someone could have.
Hey Harrison, I simply wanted to use this time to update you on some things
that have been going on, as I haven’t gotten to in almost a year. First, Kelsea and
I are doing great. I know you always had your doubts, but I appreciate how you
always had faith and hope this would pull through, even if you didn’t get to meet
her. Second, One Piece is apparently entering the final arc, which means we should
hopefully find out what the one piece is within the next few years. The manga is
getting really good right now, and it turns out that Luffy is actually a sun god so
that’s pretty cool. Attack on Titan still hasn’t finished, and MAPPA is milking that
for every penny. Elden Ring turned out to be insanely good, and you would have
loved it. I wish you could have been here so we could have enjoyed these things
together, but I hope to be able to enjoy these things for both of us in the future. You
truly were one of the best friends someone could have had. RIP in pepperoni.
Stop counting only those things you have lost. What is gone, is gone. So ask
yourself this. What is there that still remains to you? - Eiichiro Oda, Adapted
Translation
Acknowledgments
First, I’d like to thank my advisor Nathan J. McNeese who has been with
me for every step of this Ph.D. Were it not for this mentorship, I would not be the
researcher or person that I am today. Whether I need help in my work or personal
life, I feel that I can always ask a question, expect an answer, and move forward
confidently when working with you. I also feel you are an advisor who grows alongside
your students, and you continue to learn, adapt, and grow with each new person you
welcome into our world. At the end of the day, I am happy that you are someone I
can grow alongside due to our relentless desire to push each other.
To my dissertation committee, Brian Dean, Laine Mears, Eileen Kraemer, and
Brygg Ullmer, I would like to thank you for your time and expertise throughout this
process. Moreover, I would like to thank each of you for being influential at different
points in my academic career. Brian Dean, thank you for helping me grow as an
undergraduate student, a teacher, a researcher, and a person; you have been a strong
influence for almost a decade now. Laine Mears, thank you for helping me during
the formative years of my degree by always asking how my research leaves the lab,
which has helped me become the applied researcher that I am today. Eileen Kraemer,
thank you for your continued service and expertise through a variety of challenges
that have faced me, both from a research and a degree perspective. Finally, Brygg
Ullmer, thank you for not only providing expertise but also always having the most
entertaining questions to answer. Once again, I appreciate each of you in a unique
way, and this journey has been a delight because of the relationships we have been
able to build.
As an aside, I would also like to extend my gratitude to the staff at Clemson’s
School of Computing. Everyone there has been fantastic in helping me transition
degrees and jobs. I'd especially like to thank Kaley Allen and Adam Rollins for making
sure I can make rent each month.
It is also important that I acknowledge my colleagues that work with me on
a daily basis. The TRACE Research Group has helped me in my day-to-day and in
my life. I feel that we have built a family within Clemson, and we can continue to
trust and push each other every day. I cannot wait to see what you all do.
I would also like to thank those in TRACE who have been with me for years
now. First, I would like to thank our former TRACE member, Lorenzo Barberis
Canonico, as he helped me gain momentum in this program. I would like to thank
Beau Schelble for essentially going through this entire program with me and always
having my back when I need it. Rui Zhang, you have been a joy to work with, and
it has been an honor to grow with you over the years. Finally, I would like to thank
Rohit Mallick for stepping up and grabbing the torch as the senior members of our
group move on to the next stages of their lives. I truly believe that not just these
specific students but all TRACE students stand to be major forces of change in the
world, and I cannot wait to see what we are able to accomplish.
To my friends, I would like to extend a very warm thank you. A lot of us have
moved on with our lives, and I'm happy to see all of you thriving. Wayne, Nancy,
Harrison, and Will, thank you for always being the first ones to show up and the last
ones to leave. Most notably, I would like to thank Wayne for watching every DCOM
with me; I thought this dissertation would be the hardest thing I ever accomplished
but that run definitely gives it some competition. Wayne, I guess if you want to
finally know what I do for a job, feel free to read this document. Also, thank you
Caitlin Lancaster, for being a good roommate and friend for these final moments of
my degree. The companionship you all provided and continue to provide has made
this experience not only bearable but enjoyable.
And last but not least, I would like to thank those closest to me. I would like
to say thank you and I love you to my parents, Tod and Connie. You may not always
understand what I do, but I know you have always been there to push me forward
and support whatever random opportunity I decide to pursue. Whether it was my
undergrad, graduate degree, career pursuits, or my collection of disjointed hobbies, I
feel you have always been behind me. I could not and probably would not have done
this if it was not for the two of you.
To Zack and Rachel, I would like to thank you for helping me throughout this
process by always providing some sort of distraction. Whether it is random dinner
invites, trips, or simply getting ice cream, your companionship and support have helped
alleviate whatever stress this job or I impose on myself.
And finally, I would like to thank my partner, Kelsea. You truly are the main
reason I move forward at this point. You have not only helped me along through this
process, but you have also enriched my life in a way that I did not know was possible.
I cannot wait to build a life with you, and I cannot wait to help you through this
process as well.
A new age is coming. An age of daring and mighty.
And no one can turn back. - Eiichiro Oda, Translated
Table of Contents
Title Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
1 Introduction and Overview of Dissertation . . . . . . . . . . . . . . 1
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Problem Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Research Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Research Questions and Gaps . . . . . . . . . . . . . . . . . . . . . . 12
1.5 Summary of Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Human-AI Teamwork . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2 Human-Centered Artificial Intelligence and Designing for Artificial In-
telligence Acceptance . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Social Influence in Teamwork . . . . . . . . . . . . . . . . . . . . . . 43
2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3 Platform Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.1 Rocket League . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4 Study 1: Using Teaming Influence to Create a Foundational Un-
derstanding of Social Influence in Human-AI Dyads . . . . . . . . 67
4.1 Study 1: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2 Study 1a & 1b: Task . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.3 Study 1a & 1b: Participants and Demographics . . . . . . . . . . . . 73
4.4 Study 1a & 1b: Measurements . . . . . . . . . . . . . . . . . . . . . . 74
4.5 Study 1a: Overview and Research Questions . . . . . . . . . . . . . . 81
4.6 Study 1a: Experimental Design . . . . . . . . . . . . . . . . . . . . . 82
4.7 Study 1a: Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.8 Study 1a: Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.9 Study 1b: Overview and Research Questions . . . . . . . . . . . . . . 117
4.10 Study 1b: Qualitative Methods . . . . . . . . . . . . . . . . . . . . . 117
4.11 Study 1b: Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.12 Study 1b: Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5 Study 2: Examining the Acceptance and Nuance of AI Teammate
Teaming Influence From Both Human and AI Teammate Perspec-
tives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.1 Study 2: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.2 Study 2a: Research Questions . . . . . . . . . . . . . . . . . . . . . . 157
5.3 Study 2a: Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.4 Study 2a: Experimental Results . . . . . . . . . . . . . . . . . . . . . 166
5.5 Study 2b: Research Questions . . . . . . . . . . . . . . . . . . . . . . 175
5.6 Study 2b: Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.7 Study 2b: Experimental Results . . . . . . . . . . . . . . . . . . . . . 184
5.8 Study 2: Individual Differences Results . . . . . . . . . . . . . . . . . 194
5.9 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
6 Study 3: Understanding the Creation of AI Teammate Social In-
fluence in Multi-Human Teams . . . . . . . . . . . . . . . . . . . . . 211
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
6.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6.3 Study 3: Quantitative Results . . . . . . . . . . . . . . . . . . . . . . 223
6.4 Study 3: Qualitative Results . . . . . . . . . . . . . . . . . . . . . . . 243
6.5 Study 3: Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7 Final Discussions & Conclusion . . . . . . . . . . . . . . . . . . . . . 265
7.1 Revisiting Research Questions . . . . . . . . . . . . . . . . . . . . . . 265
7.2 Contributions of the Dissertation . . . . . . . . . . . . . . . . . . . . 279
7.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.4 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
A Surveys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
B Study 2 Vignettes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
List of Tables
1.1 Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Research Gaps Being Closed By Research Questions . . . . . . . . . . 13
1.3 Studies that Address Each Research Question . . . . . . . . . . . . . 15
3.1 Potential Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1 Study 1 Participant Demographics . . . . . . . . . . . . . . . . . . . 75
4.2 Study 1 2x2 experimental design. . . . . . . . . . . . . . . . . . . . . 84
4.3 Descriptive statistics for score. . . . . . . . . . . . . . . . . . . . . . . 86
4.4 Descriptive statistics for score difference. . . . . . . . . . . . . . . . . 89
4.5 Descriptive statistics for workload. . . . . . . . . . . . . . . . . . . . 90
4.6 Descriptive statistics for perceived social influence in comparison to
the AI teammate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.1 Study 2a Demographic Information . . . . . . . . . . . . . . . . . . . 159
5.2 Study 2a Experimental Manipulations, creating a 2x2 experimental de-
sign. Manipulation 1 is a within-subjects manipulation with seven con-
ditions presented in a randomized order. Manipulation 2 is a between-
subjects manipulation with two conditions randomly assigned to par-
ticipants. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.3 Post-Scenario questions shown after each vignette. Questions were
provided a seven-point Likert scale from Strongly Disagree to Strongly
Agree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.4 Linear model for effects of conditions on the perceived capability of AI
to complete workload. Each model is built upon and compared to the
one listed above it. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.5 Linear model for effects of conditions on potential helpfulness. Each
model is built upon and compared to the one listed above it. . . . . . 168
5.6 Linear model for effects of conditions on one’s own perceived benefit.
Each model is built upon and compared to the one listed above it. . . 169
5.7 Linear model for effects of conditions on job security. Each model is
built upon and compared to the one listed above it. . . . . . . . . . . 170
5.8 Linear model for effects of conditions on likelihood to adopt. Each
model is built upon and compared to the one listed above it. . . . . . 172
5.9 Study 2b Demographic Information . . . . . . . . . . . . . . . . . . . 177
5.10 Study 2b: Tasks that need to be completed by software developers and
are assigned to teammates in surveys. . . . . . . . . . . . . . . . . . . 180
5.11 Study 2b experimental manipulations. Manipulation 1 varies the num-
ber of tasks completed by the AI teammate, and in turn the human
participant. Manipulation 2 varies the endorsement provided to en-
courage AI teammate adoption. The descriptions in Manipulation 2
are not the full bullet point list shown to participants. . . . . . . . . 181
5.12 Linear model for effects of responsibility and capability endorsement
on capability of AI teammate. Each model is built upon and compared
to the one listed above it. . . . . . . . . . . . . . . . . . . . . . . . . 185
5.13 Table of the Selected Model’s Fixed Effects of Responsibility, Capabil-
ity Endorsement Methods, and Interactions on the Perceived Capabil-
ity of the AI Teammate. Effect sizes shown for significant effects and
effects that neared significance. . . . . . . . . . . . . . . . . . . . . . 185
5.14 Linear model for effects of responsibility and capability endorsement on
helpfulness of AI teammate. Each model is built upon and compared
to the one listed above it. . . . . . . . . . . . . . . . . . . . . . . . . 187
5.15 Table of the Selected Model’s Fixed Effects of Responsibility, Capabil-
ity Endorsement, and Interactions on AI Helpfulness. Effect size only
shown for significant effects. . . . . . . . . . . . . . . . . . . . . . . . 187
5.16 Linear model for effects of responsibility and capability endorsement
on helpfulness of one’s self. Each model is built upon and compared
to the one listed above it. . . . . . . . . . . . . . . . . . . . . . . . . 188
5.17 Table of the Selected Model’s Fixed Effect of Responsibility on Help-
fulness of Self. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.18 Linear model for effects of responsibility and capability endorsement
on job security. Each model is built upon and compared to the one
listed above it. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.19 Table of the Selected Model’s Fixed Effects of Responsibility and Ca-
pability Endorsement on Job Security. Effect size only shown for sig-
nificant effects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.20 Linear model for effects of responsibility and capability endorsement
on likelihood to adopt. Each model is built upon and compared to the
one listed above it. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.21 Table of the Selected Model’s Fixed Effects of Responsibility, Capabil-
ity Endorsement, and Interactions on Adoption Likelihood. Effect size
only shown for significant effects. . . . . . . . . . . . . . . . . . . . . 191
5.22 Model Comparisons and Coefficients for AI Helpfulness . . . . . . . . 194
5.23 Model Comparisons and Coefficients for One’s Own Perceived Benefit 195
5.24 Model Comparisons and Coefficients for Perceived Job Security . . . 196
5.25 Model Comparisons and Coefficients for Perceived AI Capability . . . 197
5.26 Model Comparisons and Coefficients for Adoption . . . . . . . . . . . 198
6.1 Study 3 2x3 experimental design. . . . . . . . . . . . . . . . . . . . . 214
6.2 Study 3 Demographic Information . . . . . . . . . . . . . . . . . . . . 217
6.3 Human-Machine-Interaction-Interdependence Subscales . . . . . . . . 220
6.4 Marginal means for the effects of AI count and teammate identity on
perceived performance. . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.5 Marginal means for the effects of AI count and teammate identity on
perceived performance. . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.6 Marginal means for the effects of AI count and teammate identity on
perceived mutual dependence. . . . . . . . . . . . . . . . . . . . . . . 227
6.7 Marginal means for the effects of training type and teammate identity
on perceived conflict. . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6.8 Marginal means for the effects of number of AI teammates and team-
mate identity on perceived power compared to others. . . . . . . . . . 230
6.9 Marginal means for the effects of number of AI teammates and team-
mate identity on perceived future interdependence from the teammate
to one’s self. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.10 Marginal means for the effects of number of AI teammates on perceived
future interdependence from one’s self to teammates. . . . . . . . . . 234
6.11 Marginal means for the effects of teammate identity on perceived
information certainty from one’s self to their teammates. . . . . . . . 238
6.12 Marginal means for the effects of the number of AI teammates on
perceived workload. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6.13 Marginal means for the effects of the number of AI teammates on AI
teammate acceptance. . . . . . . . . . . . . . . . . . . . . . . . . . . 240
A.1 Study 1 Demographics . . . . . . . . . . . . . . . . . . . . . . . . . . 298
A.2 Negative Attitudes Towards Agents Survey . . . . . . . . . . . . . . . 298
A.3 Disposition to Trust Artificial Teammate Survey . . . . . . . . . . . . 299
A.4 Teammate Performance Survey . . . . . . . . . . . . . . . . . . . . . 300
A.5 Teammate Trust Survey . . . . . . . . . . . . . . . . . . . . . . . . . 300
A.6 Team Effectiveness Survey . . . . . . . . . . . . . . . . . . . . . . . . 301
A.7 Team Workload Survey . . . . . . . . . . . . . . . . . . . . . . . . . . 302
A.8 Influence and Power Survey . . . . . . . . . . . . . . . . . . . . . . . 302
A.9 Artificial Teammate Acceptance Survey . . . . . . . . . . . . . . . . . 303
A.10 Study 2 Demographics . . . . . . . . . . . . . . . . . . . . . . . . . . 304
A.11 Need for Power Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
A.12 Motivation to Lead Scale . . . . . . . . . . . . . . . . . . . . . . . . . 307
A.13 Creature of Habit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
A.14 Big Five Personality - Mini IPIP . . . . . . . . . . . . . . . . . . . . 310
A.15 Workplace Fear of Missing Out . . . . . . . . . . . . . . . . . . . . . 311
A.16 Cynical Attitudes Towards AI . . . . . . . . . . . . . . . . . . . . . . 311
A.17 General Computer Self-Efficacy . . . . . . . . . . . . . . . . . . . . . 312
A.18 Computing Technology Continuum of Perspective Scale . . . . . . . . 313
A.19 Human-Machine-Interaction-Interdependence Questionnaire . . . . . . 316
B.20 Study 1 Vignette Template . . . . . . . . . . . . . . . . . . . . . . . . 317
B.21 Study 2 Example Vignette. Number of tasks changes as a within sub-
jects condition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
List of Figures
1.1 Graphic displaying the underexplored role social influence plays in
human-AI teaming during human-AI interaction. A 1-human 1-AI
dyad is shown to reduce figure complexity and increase readability. . 2
1.2 Graphical Representation of Studies . . . . . . . . . . . . . . . . . . . 15
3.1 Rocket League Screen Shot . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2 RL Bot Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3 RLBot Team Size Modification . . . . . . . . . . . . . . . . . . . . . 65
4.1 Experimental Procedure for Study 1a . . . . . . . . . . . . . . . . . . 83
4.2 AI teaming influence and variability’s effect on participants’ scores dis-
playing the main effect of teaming influence (Figure 4.2a) and the inter-
action effect between teaming influence level and variability (Figure
4.2b). Figures also display the three way interaction between round,
teaming influence level, and variability with Figure 4.2c showing the
low AI teaming influence condition and Figure 4.2d showing the high
AI teaming influence condition. Error bars represent 95% confidence
intervals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.3 Interaction effect between AI teaming influence and variability on par-
ticipants’ score difference. Error bars represent 95% confidence intervals. 89
4.4 Main effect of AI teaming influence level on participants’ perceived
workload level (Figure 4.4a) and the main effect of AI teaming influence
on the participants’ perceived level of teaming influence in comparison
to their AI teammate (Figure 4.4b). Error bars represent bootstrapped
95% confidence intervals. . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.1 Figure of the capability of the AI system to complete responsibility based
on responsibility and identity. Error bars denote 95% confidence interval. 167
5.2 Graph of the potential helpfulness of AI based on responsibility and
identity. Error bars denote 95% confidence interval. . . . . . . . . . 168
5.3 Graph of potential benefit of self alongside AI system based on respon-
sibility and identity. Error bars denote 95% confidence interval. . . . 169
5.4 Figure of job security when working with AI based on responsibility
and identity. Error bars denote 95% confidence interval. . . . . . . . . 170
5.5 Graph of likelihood to adopt AI based on teammate responsibility and
identity. Error bars denote 95% confidence interval. . . . . . . . . . . 172
5.6 Graph of AI capability based on teammate responsibility and capability
endorsement. Error bars denote 95% confidence interval. . . . . . . . 185
5.7 Graph of helpfulness of AI based on teammate responsibility and ca-
pability endorsement. Error bars denote 95% confidence interval. . . . 187
5.8 Graph of helpfulness of self based on teammate responsibility and ca-
pability endorsement. Error bars denote 95% confidence interval. . . . 188
5.9 Graph of job security based on teammate responsibility and capability
endorsement. Error bars denote 95% confidence interval. . . . . . . . 190
5.10 Graph of likelihood to adopt AI based on teammate responsibility and
capability endorsement. Error bars denote 95% confidence interval. . 191
6.1 Figure of task performance based on the number of AI teammates and
whether or not the perception is towards the human or AI teammate.
Error bars denote 95% confidence intervals. . . . . . . . . . . . . . . . 224
6.2 Figure of perceived performance based on the number of AI teammates
and whether or not the perception is towards the human or AI team-
mate. Error bars denote 95% confidence intervals. . . . . . . . . . . . 226
6.3 Figure of perceived mutual dependence based on the number of AI
teammates and whether or not the perception is towards the human
or AI teammate. Error bars denote 95% confidence intervals. . . . . . 227
6.4 Figure of perceived conflict based on training type and whether or
not the perception is towards the human or AI teammate. Error bars
denote 95% confidence intervals. . . . . . . . . . . . . . . . . . . . . . 229
6.5 Figure of perceived power one has compared to others based on the
number of AI teammates and whether or not the perception is towards
the human or AI teammate. Error bars denote 95% confidence intervals. 230
6.6 Figure of perceived future interdependence from teammates to one’s
self based on the number of AI teammates and whether or not the
perception is towards the human or AI teammate. Error bars denote
95% confidence intervals. . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.7 Figure of perceived future interdependence from one’s self to team-
mates based on the number of AI teammates. Error bars denote 95%
confidence intervals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
6.8 Figure of perceived future interdependence from one’s self to team-
mates based on the number of AI teammates, teammate identity, and
the training participants had. Error bars denote 95% confidence intervals. 236
6.9 Figure of perceived information certainty from one’s self to teammates
based on teammate identity. Error bars denote 95% confidence intervals. 238
6.10 Figure of perceived workload based on the number of AI teammates.
Error bars denote 95% confidence intervals. . . . . . . . . . . . . . . . 239
6.11 Figure of AI teammate acceptance based on the number of AI team-
mates. Error bars denote 95% confidence intervals. . . . . . . . . . . 240
7.1 RQ1 Study Relationships . . . . . . . . . . . . . . . . . . . . . . . . . 268
7.2 RQ2 Study Relationships . . . . . . . . . . . . . . . . . . . . . . . . . 271
7.3 RQ3 Study Relationships . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.4 RQ4 Study Relationships . . . . . . . . . . . . . . . . . . . . . . . . . 277
Chapter 1
Introduction and Overview of
Dissertation
1.1 Overview
“The potato can, and usually does, play a twofold part: that of a nutritious
food, and that of a weapon ready forged for the exploitation of a weaker
group in a mixed society.” –Redcliffe Salaman, The History and Social
Influence of the Potato
“If the simple potato can have a great deal of social influence on the way
humans interact with each other and society, what is stopping AI team-
mates from doing the same?” –Christopher Tod Flathmann, How to Make
Agents and Influence Teammates
The integration of AI systems in the coming years has the opportunity to revo-
lutionize modern workforces and provide them with highly analytical systems capable
of performing actions not feasible by humans [438]. However, more than being tools
used by humans, the concept of AI promises the ability to create autonomous systems
that can listen, interpret, think, and act autonomously without the direct control of
human users [406, 269]. Specifically, human-AI teaming serves as a fully realized im-
plementation of this idea, as it allows AI systems to be designed, created, and viewed
as an autonomous teammate that can act interdependently with humans rather than
being used as a simplistic tool [329]. Moving AI systems away from tooling and
towards teaming is, however, not a trivial task and may not be an entirely frictionless
experience. Importantly, new frictions toward AI teammates’ integration may arise as
the transition from tool to teammate presents a greater level of responsibility, ability,
and most of all social influence for an AI system interacting with humans. Thus, this
increase in social influence must be explicitly understood by researchers if AI team-
mates are going to be able to enter future societies and workforces. The following
chapter provides a definition and contextualization for this social influence, details
the challenges preventing us from understanding this social influence, and summarizes
the work completed by this dissertation.
Figure 1.1: Graphic displaying the underexplored role social influence plays in human-
AI teaming during human-AI interaction. A 1-human 1-AI dyad is shown to reduce
figure complexity and increase readability.
1.2 Problem Motivation
While human-AI teaming research has gained traction over the last several
years, the current state of said research has (1) largely ignored the critical teaming
component of social influence, while also (2) minimizing the consideration of the
initial acceptance of AI teammates in real-world settings. These two challenges are
discussed in detail below and serve as the central motivations of this dissertation.
As AI technology progresses, human-AI teaming serves as a unique application
of AI technology where AI teammates have more complex roles than simplistic tools
because the role, responsibility, and autonomy of a teammate are greater than those of
a tool [329]. This increase in role and responsibility has ultimately led AI teammates
to have greater use of shared resources to accomplish shared goals within a human-AI
team, otherwise known as having a teaming influence (Top/Bottom Purple Boxes,
Figure 1.1). The presence of this teaming influence has been shown in previous
research to heavily contribute to various teaming results, such as team performance
[329], trust [284], shared understanding [111], ethics collaboration [143] and team
cognition [390] (Right-most red box, Figure 1.1).
Despite research acknowledging the impacts of AI teammate teaming influence
on the above-mentioned teaming factors and on human teammates [480], there is
likely an unexplored secondary influence that contributes to these human-AI teaming
factors known as Social Influence (Middle Yellow box, Figure 1.1). Social influence is
defined as the “change in an individual’s thoughts, feelings, attitudes, or behaviors that
results from interaction with another individual or a group” [355]. Within human-
human teams, social influence often occurs as a result of teammates interacting with
each other’s teaming influence [240]. Based on this understanding, a representation
of how AI teaming influence potentially leads to social influence through human-AI
teaming interaction is shown in Figure 1.1. Despite the importance of social influence
to teaming [240], this potential representation is inherently limited as research has
yet to explicitly explore how interaction with teaming influence creates AI teammate
social influence that facilitates lasting change in humans’ behaviors and perceptions.
As such, a holistic understanding of human-AI teaming and AI teammate impacts
cannot be achieved without understanding AI teammate social influence. Moreover,
this holistic understanding is critical as all of these factors, including social influence,
will exist in real-world teams that eventually work with AI teammates.
Furthermore, not only would understanding AI teammate social influence in-
crease an understanding of human-AI teaming, but the existence of AI teammate
teaming and social influence may ultimately impact the technology’s acceptance,
meaning that ignoring the concept may hinder the acceptance of AI teammates as an
actual technology. Generally, technology acceptance is often dictated by two key
factors, perceived utility and ease-of-use, which have been synthesized into the tech-
nology acceptance model (TAM) [105]. While perceived utility has maintained
a relatively constant presence in the TAM, it has yet to be explored from the per-
spective of AI teammate teaming influence. Additionally, ease-of-use has often been
heavily updated to better accommodate new mediums, modalities, and applications
[451]. However, the existence of social influence from AI teammates means that hu-
mans will not just use AI teammates, but AI teammates will in fact use humans.
Meanwhile, the only consideration of social influence in ease-of-use and technology
acceptance is external social influence, such as peer pressure from human friends to
accept technology, and not social influence presented by the technology itself [451].
Thus, the potential acceptance of AI teammates cannot be understood without a
consideration of the role AI teammate social influence and teaming influence plays
on humans’ acceptance.
The above two challenges present two unique roadblocks and gaps that pre-
vent the progression of human-AI teaming: (1) a minimal understanding of how AI
teammate teaming influence becomes social influence prevents AI teammate impacts
from being holistically understood, and (2) the initial acceptance of AI teammates
may be miscalculated without an understanding of the acceptance of AI teammate
teaming and social influence. Thus, this dissertation tackles these roadblocks with
the goal of making AI teammates that are both beneficial to and accepted by human
teammates.
1.3 Research Motivation
While the above problem motivation identifies the specific practical knowledge
gaps that inspire this work, these problems must be solved through the understanding
and subsequent closure of existing research gaps. Specifically, this dissertation is a
synthesis of two research domains: Human-AI Teaming and Human-Centered AI.
Both domains provide an understanding of rapidly advancing fields of technology
that have seen a high degree of crossover in recent years. Moreover, the importance
of both AI teammate and teaming influence, and the acceptance of said teaming and
social influences, necessitates the crossover between these two domains to achieve a
holistic picture. In addition to the fields of Human-AI Teaming and Human-Centered
AI, the general research field of social influence also provides key considerations for
this dissertation.
1.3.1 Human-AI Teaming
Over the past decade, AI systems and their applications have advanced in a
variety of ways. For instance, Natural Language Processing, Decision Making, Digital
Assistants, and even Recommender Systems have all uniquely propelled AI as a tech-
nology forward [459]. However, one of the most promising applications of AI technol-
ogy, which will collectively build on a multitude of other AI domains, is the creation of
AI teammates. AI teammates are created to function as autonomous systems along-
side humans by leveraging their unique computational strengths to complete tasks in
real-world contexts [285]. Specifically, the definition of human-AI teams is as follows:
“interdependence in activity and outcomes involving one or more humans and one or
more autonomous agents, wherein each human and autonomous agent is recognized
as a unique team member occupying a distinct role on the team, and in which the
members strive to achieve a common goal as a collective” [329]. While the defini-
tion and concept of AI teammates have yet to see consistent real-world application,
research domains have recently begun to heavily explore the concept for
future application [389]. Fortunately, past research has found immense potential in
human-AI teaming as an application of AI technology, as AI teammates are able
to utilize computational capabilities to complement potential weaknesses in human
teammates, leading to greater levels of efficiency [285]. However, the potential mag-
nitude of these efficiency gains is not always presented alongside similar increases in
perceptions of AI systems, as trust in AI teammates is often lower than trust in
human teammates [284]. Moreover, perceptual impacts are not simply isolated to
human-AI relationships but also impact the perceptions humans have for each other,
meaning human-human relationships can also be impacted by AI teammates [140].
Due to the potential misalignment of performance gains and human-factors
issues surrounding human-AI teams, a large portion of recent research has shifted to-
wards more critically examining specific human factors and human teaming concepts
in human-AI teaming, many of which are listed above [329]. However, the common
exploration of these factors simply looks at the final impact that the existence of AI
teammate teaming influence has on said factors without consideration for the grad-
ual change process of social influence that facilitates these results. For instance, the
interactions humans have with AI teammates have shown demonstrable impacts on
trust [284], and the same can be said for other human factors such as shared under-
standing [111] or ethics [259]. Additionally, research has only examined how external
social influences, such as organizational policy, can impact human-AI collabo-
ration and teaming [262]. However, the social influence presented by AI systems in
human-AI teams has yet to be explored, and it has only received minor exploration in
the broader domain of human-AI interaction [184]. Thus, existing human-AI teaming
research has a blind spot in its understanding as it is agnostic of the social influence
AI teammates themselves have on human teammates, which has created the following
gap: the social influence AI teammates have as a result of their teaming influence in
human-AI teams has not been explicitly explored.
The above research gap must also be expanded to a second research gap in
human-AI teaming. Specifically, the social influence of AI teammates is not going to
be the only social influence humans experience in human-AI teams, which can include
human-human and human-AI relationships. Humans already provide a great deal of
social influence in the teams they are a part of [165]. Moreover, within multi-human
human-AI teams, there is competition between different social influences, which ul-
timately diminishes the impact of an individual’s social influence and even impacts
existing relationships on teams [156]. Therefore, while creating a base understanding
of how social influence from an AI teammate will impact human teammates, this
understanding needs to be further extended to accommodate (1) existing human re-
lationships that could be impacted by this social influence and (2) the presence of
competing human influence that may weaken AI teammate social influence. Thus,
the above research gap must be extended to include: AI teammate social influence
has not yet been explicitly studied in contexts with existing human relationships and
teaming influence from multiple humans.
1.3.2 AI Acceptance & Human-Centered AI
As mentioned above, the TAM has historically been a critical component in
determining the potential acceptance of emerging technologies [105]. Moreover, its
relevance and importance have not faded over the years as it has been repeatedly
updated to accommodate new technologies alongside our growing understanding of
human-computer interaction [89]. Indeed, it is not unusual for a newly emerging field
of technology, such as human-AI teaming, to require updates to the TAM. Impor-
tantly, then, updating the TAM to better accommodate human-AI teaming will not
negate existing understandings of ease-of-use and perceived utility as they are still ap-
plicable to human-AI teaming and AI technologies [17, 16], but will rather widen the
TAM’s applicability to this novel human-computer interaction. For instance, the per-
ceived reliability of a system, a critical component of general technology acceptance,
is also an important factor for AI teammates, as humans will need to rely on them
in teaming situations [99]. Thus, the exploration of improving human-AI interaction
from the perspective of existing factors that impact ease-of-use is critical.
Recently, in hopes of improving both perceived utility and ease-of-use, AI re-
search has shifted its focus towards the idea of designing AI to be human-centered.
Specifically, this human-centeredness is often achieved by creating design recommen-
dations for AI systems that researchers and developers can use as guidelines for build-
ing AI systems that benefit humans [22]. These include recommendations for AI
systems that consider the differences between individuals that may impact how they
perceive, accept and interact with AI [474]. These recommendations can often be
targeted towards increasing the perceived utility of an AI system, such as increases
in algorithmic precision and explainability and the creation of educational materials
[161, 29, 301]. However, recommendations can also specifically target the ease-of-use
of AI tools, such as the use of voice interaction, conversational speech, or visual com-
munication [449, 104, 223]. Thus, while modern research in AI tooling has commonly
tackled the concepts that help garner AI acceptance, these findings may become less
relevant due to the unique teaming and social influence imposed by AI teammates
as opposed to AI tools. Therefore, the following research gap exists: AI teammate accep-
tance does not yet consider the social influence associated with AI teammates’ teaming
influence, resulting in a miscalculation of potential acceptance.
In light of recent events in 2023, the above problem motivations and human-
centered AI research motivations have gained increasing levels of importance. At the
time of writing this document, new and powerful AI platforms, such as OpenAI’s
ChatGPT, have been introduced, and they have been rapidly propelled into society’s
focus due to their ability to benefit a variety of work domains, including software
development or even academic writing [437, 429]. However, in conjunction with this
introduction, tens of thousands of layoffs have occurred in the tech sector, and bil-
lions have been further invested in AI platforms [14]. While these platforms may
not be the direct cause of these layoffs, the timely introduction of these platforms
as well as their proposed capabilities pose a potential disruption to the workforce
[34], and workers’ perspectives on these technologies may begin to shift from interest
to hesitancy. As such, in addressing the above research and problem motivations, this
dissertation also demonstrates the damage that the implementation of AI systems as
tools or teammates can cause to technology acceptance.
1.3.3 Relevant Concepts of Social Influence to AI Teammates
While this dissertation does not focus on directly extending existing literature
in the field of social influence due to its breadth and variety, it is still important to
explicitly reiterate this dissertation’s scoping of social influence in human-AI teams, as
the concept of social influence is highly broad [400, 240]. Using the definition provided
above along with the scoping of human-AI teaming, the concept of “interactions” is
being scoped to only refer to the interactions humans have with AI teammates when
both are working in a shared environment to complete a shared goal. In other words,
this dissertation examines how the teaming influence an AI teammate has on a shared
task and goal ultimately creates social influence on humans’ perceptions and behaviors.
This scoping places social influence as an interaction that is the result of natural
teaming processes and shared goals and not the result of targeted social influence and
manipulation done with the explicit goal of manipulating others [412, 98]. While the
concept of social influence has been used to better design the social components of AI
systems, such as the conversational language used by them [346], the explicit study
of human-AI social influence that arises from human-AI teaming influence has yet to
be fully studied, as mentioned previously.
Importantly, the concept of social influence sees explicit operationalization
within teaming research. For example, team interdependence is often wholly reliant
on the social influence received from teammate interaction, as efficient reception of
this social influence enables efficient interdependent teaming [165]. Furthermore,
beneficial teammate interactions with leaders often consist of leaders having social
influence through exemplary behaviors and not underhanded manipulation [409, 19].
The inclusion of technology into teaming has also enabled modern teams to better
mediate and use social influence in different hierarchies of work [256]. Thus, social
influence as a concept is not only relevant to teaming, but also to teams within the
digital age and AI agents as a whole. In other words, even if social influence changes
in its manifestation, it will still exist in a recognizable form within human-AI teams.
Despite the importance of social influence to teaming and the ability of digital systems
to impose social influence, technology-mediated/imposed social influence has not yet
considered the ability of AI teammates to possess and impose high levels of social
influence. Thus, the following gap exists: technology-mediated social influence has
yet to be studied from the perspective of AI teammates.
Furthermore, one cannot simply look at the role of AI in this social influence
process, but also the role of the human that will experience said social influence.
Within the field of social influence, the concepts of applying and receiving social
influence are known as persuasion [325] and susceptibility [4], respectively. Rather
than diving into the technical specifics of these two concepts, a real-world example
provides a clear illustration. Take for instance two players on a basketball team:
one player has the ball (the human teammate) and the other is open for a pass
(the AI teammate). While both teammates are working towards a shared goal, an
AI can attempt to persuade the human teammate by getting open and asking for a
pass. However, the human may not pass as they lack susceptibility to AI teammate
social influence due to, say, negative perceptions of the AI teammate. Extending this
analogy, one can see that it is not enough for an AI to be persuasive in its use of social
influence; humans also need to be susceptible to an AI teammate's social influence to
accept said social influence, in turn accepting the AI teammate. As such, given this
dissertation’s focus on both acceptance and social influence, these concepts will be
used as proxies for (1) the innate acceptance humans could have for AI teammates and
their teaming and social influence (susceptibility) and (2) the ways AI teammates can
be designed to better promote acceptance (persuasion). Thus, the following research
gap exists: Factors that mediate the susceptibility and persuasion of AI teammate
social influence have not been empirically examined.
1.4 Research Questions and Gaps
This dissertation answers a multitude of research questions targeting an un-
derstanding of social influence in human-AI teaming with a perspective of how their
answers may shed light on the potential acceptance of AI teammates. In Table 1.1,
these research questions are listed, and they serve as the center of discussion for the
entirety of this dissertation.
RQ# Research Question
RQ1 How does teaming influence applied by an AI teammate become a
social influence that affects human teammates?
RQ2 How do varying amounts of AI teammate teaming influence mediate
humans’ perceptions and reactions to AI teammate social influence?
RQ3 How accepting are humans to AI teammate teaming and social influ-
ence, and can AI teammate design increase acceptance?
RQ4 Does the role of AI social influence change in teams with existing
human-human teaming and social influence?
Table 1.1: Research Questions
Not only is this work guided by the answering of critical research questions,
but this dissertation also works to close multiple research gaps pertaining to human-
AI teaming, human-centered AI, and the social influence of AI technology. Thus,
Table 1.2 outlines how the research questions listed above target specific gaps, which
are derived from this dissertation’s problem and research motivations. Additionally,
the gaps listed are not exclusive to research domains but instead target gaps span-
ning both research and practical efforts in human-centered AI. Closing these gaps
in addition to answering the above research questions ultimately ensures that the
contributions of this dissertation serve both research and industry efforts.
Research Gap Research Question
The social influence AI teammates have as a result
of their teaming influence in human-AI teams has
not been explicitly explored
RQ1, RQ2
AI teammate social influence has not yet been
explicitly studied in contexts with existing human
relationships and teaming influence from multiple
humans.
RQ1, RQ2, RQ4
AI teammate acceptance does not yet consider the
social influence associated with AI teammates’ teaming
influence, resulting in a miscalculation of potential
acceptance.
RQ2, RQ3
Technology-Mediated Social Influence has yet to be
studied from the perspective of AI teammates.
RQ1, RQ2, RQ4
Factors that mediate the susceptibility and persuasion
of AI teammate social influence have not been
empirically examined
RQ3
Table 1.2: Research Gaps Being Closed By Research Questions
1.5 Summary of Studies
This dissertation is composed of three overarching studies. Each study is
summarized below; however, detailed breakdowns of each study are included further
in this document. Importantly, given the little understanding we have of AI teammate
social influence, this dissertation works from the perspective of manipulating teaming
influence and linking these manipulations to the outcomes of social influence and
acceptance. In turn, an analytical connection can be made between AI teammate
social influence and acceptance.
The general design of this dissertation is that Study 1 provides a founda-
tional understanding of teaming and social influence, and this understanding is then
extended by Study 2 and Study 3 in different ways. The first study of this work
manipulates teaming influence by manipulating AI teammate behavior, and it details
how and why teaming influence ultimately becomes social influence. Study 2 extends
this understanding by explicitly linking teaming influence to acceptance by examin-
ing how variations in teaming influence impact acceptance. Finally, Study 3 extends
Study 1 by exploring the creation of social influence in more complex environments
where multiple human teammates exist, and it manipulates the number of AI team-
mates as well. In summary, each study has been linked to the Research Questions
it explicitly addresses in Table 1.3, and a figure representing the structure of this
dissertation can be found in Figure 1.2.
Figure 1.2: Graphical Representation of Studies
Study # Short Study Title Research Questions Addressed
1
Using Teaming Influence to
Create a Foundational
Understanding of Social
Influence in Human-AI Dyads
RQ1, RQ2, RQ3
2
Understanding Acceptance and
Susceptibility Towards AI
Teammate Social Influence
RQ2, RQ3, RQ4
3
Understanding the Impact of AI
Teammate Social Influence that
Exists Alongside Human Social
Influence
RQ1, RQ2, RQ4
Table 1.3: Studies that Address Each Research Question
1.5.1 Study 1: Using Teaming Influence to Create a Foundational Understanding of Social Influence in Human-AI Dyads
Through two sub-studies (Study 1a and Study 1b), Study 1 observes how the
more familiar concept of teaming influence ultimately impacts human-AI teams and
becomes social influence. This first observation is critical because AI teammate social
influence has not yet been explicitly documented, meaning a foundational exploration
of its existence is needed. Study 1a examines
how varying levels of teaming influence change human performance and perceptions.
Study 1b provides an in-depth exploration of how an AI teammate’s teaming influence
becomes social influence.
For Study 1a, results indicate that high levels of teaming influence harm per-
formance, but if AI teammates decrease their teaming influence over time they can
“set the tone” and socially influence humans to improve their own performance. For
Study 1b, results show that humans rapidly interpret AI teammate teaming influence
as social influence and quickly adapt, but some conditions need to be met for this to
happen. Specifically, humans need to feel a sense of control, justify their adaptation
through skill gaps or technology limitations, and observe AI teammate behavior to
determine how to best adapt. The results of these two studies not only verify the fun-
damental existence of AI teammate social influence but also empirically demonstrate
its impact on perception and performance.
1.5.2 Study 2: Examining the Acceptance and Nuance of AI
Teammate Teaming Influence From Both Human and
AI Teammate Perspectives
Study 2 of this dissertation is inspired by the most prevalent finding of Study
1: ideal teaming influence levels are not universal but a result of AI design and
humans’ past experiences. Furthermore, Study 1 found that the optimal level of teaming
influence is not binary, in that neither high nor low levels were universally ideal; rather,
ideal levels of teaming influence exist on a spectrum that is highly personal. Thus, Study
2 utilizes two factorial survey studies to examine the acceptance of teaming influence
from the following angles: (1) the ideal level of teaming influence humans want when
given a higher-fidelity spectrum of options; (2) how changes in the presentation of AI
teammates to humans can mediate humans’ acceptance of
AI teammate teaming influence; and (3) the individual differences humans have that
can mediate their acceptance of AI teammate teaming influence.
For teaming influence allocation, participants often had declining perceptions
of AI as teaming influence increased across multiple tasks, but this was not the case
when teaming influence was allocated within a single task, which saw the highest
levels of adoption likelihood when teaming influence was evenly shared between humans
and AI. On the other hand, perceptions such as job security always trended
down as AI teaming influence increased, regardless of whether that teaming influence
was shared across multiple tasks or a single task. Additionally, these perceptions
were shown to be more positive when the AI was communicated as a tool, endorsed by
coworkers, or previously observed by the participant. Finally, individual
differences results show that common individual differences measures were
not consistently associated with AI teammate adoption, except for
one’s perceived capability of computers, which had a positive relationship with AI
teammate adoption. These results demonstrate that, in regard to AI teammate
acceptance, the teaming influence and design of AI teammates have demonstrably more
impact than the individual differences commonly measured in
technology acceptance research.
1.5.3 Study 3: Understanding the Creation of AI Teammate
Social Influence in Multi-Human Teams
Given that teaming influence is not solely confined to human-AI relationships,
the contributions of this dissertation would not be complete if they did not
examine how AI teammate teaming and social influence impact human-AI teaming
when multiple sources of teaming influence exist. Thus, Study 3 focuses on the role
AI teammate social influence can play outside of dyadic teams where human-human
social collaboration is present. Additionally, Study 3 manipulates
the level of AI teammate teaming influence while also varying the amount
of preexisting experience human teammates have with each other, with the goal of
increasing prior exposure to human-human teaming influence. However, unlike
Study 1, the amount of AI teaming influence is manipulated by varying the number
of AI teammates applying teaming influence rather than the frequency with which a single
teammate applies teaming influence. This is an important difference, as AI teammate
teaming influence as defined through the number of AI teammates present will likely
become more salient over time due to an increasing prevalence of AI teammates within
teams. Study 3 also examines the impact of this teaming influence on human-human
and human-AI relationships through the lens of interdependence, which is a critical
consideration in teaming and social influence as stated above.
Results show that there is a gap between human-human and human-AI
perceptions of interdependence, with humans often perceiving themselves to be
significantly more interdependent with other humans than with AI teammates, despite
the AI teammates having significantly higher perceived performance. Additionally, this gap
grows as the amount of AI teaming influence increases through increases in the number
of AI teammates. However, an evaluation of the qualitative data shows that, while
perceived interdependence lessened for AI teammates, humans became more behaviorally
interdependent with AI teammates as a result of increasing teaming influence. Thus, Study 3 finds that
as AI teammate teaming influence becomes social influence, said social influence can
negatively impact humans’ perception of AI teammates, but it can positively impact
their behavioral adaptation around AI teammates. Additionally, human-human rela-
tionships and perceptions were shown to be strengthened by humans creating deeper
understandings of their human teammates and having much stricter expectations for
their AI teammates. Given these results, Study 3 concludes with the finding that AI
teammate teaming influence will become social influence in multi-human teams, but
humans prefer the presence of human teaming and social influence in these teams.
1.6 Conclusion
With the technology required to create AI teammates becoming more of a
reality every day, there still exists a gap that will prevent the initial formation of
human-AI teams. Specifically, while a large portion of research has acknowledged
the teaming influence AI teammates will have, this dissertation is the first to broadly
and explicitly study the social influence that stems from teaming influence to create
lasting change in human teammates. Moreover, each study provides a unique and
novel contribution. Study 1 provides one of the first explorations of how teaming
influence ultimately becomes social influence in human-AI teams and how variances
in teaming influence change the impacts of social influence. Study 2 is one of the first
studies to explicitly examine the acceptance of AI teammates, especially in terms of
the acceptance of AI teammate teaming influence. Finally, Study 3 provides one of
the first explorations of whether AI teammate teaming influence can become social influence
when multiple humans exist in a human-AI team. The novel contributions from these
three studies provide a foundational understanding of the existence of teaming and
social influence in human-AI teams and the relationship between said influences and
human acceptance. Thus, this dissertation enables researchers and practitioners alike
to ensure that the potential benefits of human-AI teams are not squandered because
humans reject the concept and block its initial formation.
Chapter 2
Background
Before diving into the specific studies that comprise this dissertation, the past
research that this work is based upon should be reviewed. Specifically,
this work builds on three different areas: (1) human-AI teaming; (2) human-centered
AI and AI acceptance; and (3) social influence in relevance to human-AI teaming.
For (1), human-AI teaming is projected to be a highly opportune context for AI to
be integrated and will serve as the contextual motivation that helps scope the con-
tributions and environments that this dissertation targets. For (2), the domain of
human-centered artificial intelligence will serve as the problem and deliverables moti-
vation of this dissertation, meaning the outcomes and interpretations of this research
are targeted toward human-AI teaming but from a human-centered AI perspective.
Lastly, for (3), the components of social influence that are relevant to human-AI
teaming serve as the theoretical and historical motivation for this work. These three
domains are discussed below within these contexts and will enable the later presen-
tation of the three studies within this dissertation.
2.1 Human-AI Teamwork
The domain of human-AI teaming sits at a highly interdisciplinary crossroads
between computing, teaming psychology, and human-computer interaction.
Moreover, this interdisciplinary nature often results in a fast-paced and turbulent
research community that is racing to maintain pace with computationally driven
research. In fact, this turbulence is one of the key motivators for this research, as new
innovations are created every day and continue to make AI teammates more viable.
However, this viability comes at a cost, and that cost is that of social influence. While
technology has historically mediated social influence in teams, AIs possess both the
qualities of technology and teammate, meaning they will mediate social influence as
a technology while also owning and utilizing social influence as a teammate. This
merger is what makes AI teammates different from basic AI systems that are used
as tools. Specifically, this work examines two important components of human-AI
teaming: (1) its current state; and (2) the recent shift towards human factors that
are driving the domain's future. This work uses these components to define the
environmental context at which the studies and contributions of this dissertation are
aimed.
2.1.1 The Current State of Human-AI Teaming
As of now, human-AI teaming is still in its infancy; however, definitions and
instances of human-AI teaming have begun to appear in recent years. This study
utilizes the following definition of human-AI teaming: “at least one human working
cooperatively with at least one autonomous agent, where an autonomous agent is
a computer entity with a partial or high degree of self-governance with respect to
decision-making, adaptation, and communication” [329]. In other words, agents or
AI teammates within human-AI teams must have at least some teaming influence over
their own decisions, adaptation, and communication. Human-AI teaming’s potential
for society and the workforce is clear, as evidenced by a wide range of conceptual and
empirical research. For example, human-AI teams’ performance has the potential to
far surpass human-human teams if AI teammates are designed correctly [285, 110],
and in domains where human-human teams may remain superior, human-AI teams
can serve as high-quality training methods [299]. Additionally, the incorporation of AI
teammates requires a transition of human workers from roles that perform
repetitive tasks to those that require nuance and high-level problem solving, thus
allowing human and AI teammates to complement each other [392, 199, 44]. Thus,
shifting from human-human teaming to human-AI teaming involves more than
just adding an AI teammate to a team; rather, it requires a realignment of existing goals,
roles, and behaviors in existing teams.
This shift does not mean that maximizing the contribution of an AI-agent in
turn maximizes the benefit to a human [40, 43, 39]. Rather, the creation of effective
human-AI teams needs to be guided by the effective use of both humans and AIs.
Unfortunately, this balance is not always a given due to the fractured nature of
human-AI teaming research. Specifically, AI-agent research is often conducted from a
computer science perspective, which for a long period of time was agnostic of potential
human collaborators and has been more focused on algorithmic design and validity
of AIs [354]. On the other hand, human research has historically been conducted by
psychology and human-factors researchers who lacked the expertise to build real AI
systems and were often relegated to Wizard-of-Oz studies where a human masqueraded
as an AI [102, 276, 443, 399]. While this human research produced important findings,
its use of human stand-ins for AIs resulted in research results that are more advanced
than the current state of AI research [285], which means not only do computational
and human research come from two different domains, but also two different temporal
perspectives. Human research looks further into the future, while computer research
focuses on the tools available now; however, these two directions have continuously
drifted towards each other and recently begun to intersect with the creation of multiple
real-world autonomous systems being pioneered by human-factors research groups
[38, 299, 364], and more computer science oriented research beginning to consider
human compatibility [42, 41].
Fortunately, this merger is rapidly advancing the potential application of
human-AI teams in a variety of contexts, which provides a clear motivation for how
wide-reaching the contributions of this dissertation will be. For example, military
contexts have repeatedly shown interest in the integration of AI-agent systems, es-
pecially alongside humans [80, 37, 394, 71, 369]. Moreover, the military domain has
already recognized the importance of both technological and human advances in the
creation of human-AI teams, which has led to the formation of multiple funding pro-
grams targeted at the intersection of these topics [321, 320, 319]. Importantly, one
of these funding programs even includes attention towards the concepts of trust and
influence, as influence and its organization have long been critical components of
military domains [322]. Similarly, medical teams can make wide use of human-AI teams,
whether in data management and diagnosis [10, 238, 69, 188, 413, 238], patient
care [214, 162, 210, 55, 23, 279], or even surgery teams [84, 173, 174, 232, 483, 293].
Other domains that could benefit from human-AI teaming include search and res-
cue [347, 8, 12], finance trading and planning [351, 342, 195] and manufacturing
[258, 139, 213, 57, 169, 389], just to name a few. Due to the scoping of this disserta-
tion, it is not important to dive any further into the application of human-AI teaming
within these domains; however, highlighting the wealth of research coming out of all
domains is important for demonstrating the breadth of application society will see
for human-AI teaming. Importantly, as computing and human-AI research advances,
the application of human-AI teaming to these domain areas inches ever closer, and
new potential contexts for application are appearing often.
2.1.2 Human Factors and Their Importance to Human-AI
Teaming
As the interdisciplinary field of human-AI teaming has grown closer over the
years, research has increasingly identified the criticality of human-factors. While the
concept of human-centered AI has come a long way (discussed later), teaming derives
not from human-centered technology but from human-human interaction. This does
not, however, mean that the later discussion on human-centered AI is irrelevant,
as the highly human nature of teaming makes general human-centeredness critical
[471, 458]. Despite this, it is not enough for AI teammates to utilize basic human-
centered design; rather, they must be designed with teaming factors in mind [480].
Specifically, human factors within teaming have been a major component of teaming
research, and as human-centered AI has grown, so too has the wealth of research
targeting human factors in human-AI teaming.
Specifically, it is important to review which human factors are being researched
within human-AI teaming, their identified importance to human-AI teaming, and
their relationship to influence within a team. It is also important to mention that
efficient human-AI teaming requires the holistic inclusion of all of these factors,
including general human-centered design; however, that cannot be done until each
factor is explored on its own [81]. Thus, this work contributes to the exploration of
social influence, but it also considers other human factors and their relationship with
teaming influence, ultimately advancing our understanding of other human factors in
addition to social influence. Moreover, this dissertation will review research in team
cognition, ethics, and trust to elicit lessons from other work in human-AI teaming
that will help guide this dissertation, while also discussing the minimal attention
social influence has received in human-AI teaming research.
2.1.2.1 Team Cognition
Firstly, team cognition is one of the most critical factors in successful teaming
and has become a factor in human-AI teaming that is gaining popularity [384, 382,
383]. Despite the criticality of team cognition in teaming, its broadness has served
as a roadblock that prevents its empirical exploration, which has led research in
human-AI teaming to first target individual subcomponents of team cognition, such as
shared knowledge, shared awareness, perceived team cognition, and communication.
For example, empirical research has identified how the inclusion of AI teammates
within a team can influence the similarity of shared knowledge in humans [40, 390,
298]. Moreover, shared awareness has become a hot topic within human-AI teaming
as the control and organization of multiple AI teammates can be reliant on shared
awareness [73, 387, 126], which means human-AI leadership, and thus social influence,
would be connected to team cognition through the importance of shared awareness.
However, despite its exploration, team cognition can still be considered an under-
explored human factor in human-AI teaming as important milestones in the concept
have yet to be explored. For instance, interactive team cognition, a viewpoint of team
cognition that is more dynamic [97, 96], has yet to receive exploration in human-AI
teaming despite its potential benefit.
Thus, a conclusion can be made from the current state of team cognition re-
search in human-AI teaming: the initial exploration of a human-human teaming topic
in human-AI teaming does not finalize its contribution. This conclusion is why this
dissertation tackles social influence from a variety of different angles, sources, and
applications as a single study about the potential for AI social influence would not
be sufficient to explore the topic’s importance to human-AI teaming. Moreover, this
dissertation does not position itself as the final exploration of human-AI teaming influ-
ence due to the breadth of research that this would require. For example, although this
dissertation targets behavioral changes that mediate social influence, organizational,
social, and societal changes would require further exploration.
2.1.2.2 Ethics
Secondly, ethics have become an important consideration for human-AI teaming,
as the topic has become a highly integral part of human-centered AI and this
importance has carried over into teaming contexts. Specifically, AI ethics
have been repeatedly studied, as they are a societal concern and interest that has trickled
into the research community [152, 151, 261]. For example, humans’ societal perceptions of AI
systems can be impacted by the depiction of technology in the media
[79, 60]. This is similarly related to technology-mediated influence (discussed in detail
later) through the concept of media effects, which relates to the impacts media can have
on perceptions [393, 414]. Thus, the potential for AI to have real-life consequences
for humans means that AI’s design and implementation have to be ethical. In turn,
the exploration of ethics and how ethics can play a role in human-AI teaming has
begun both conceptually [143, 131] and empirically [259, 448]. However, one thing is
clear from this early research: individuals’ previous perceptions about AI and ethics,
and how society has created those perceptions, are important when humans work in
human-AI teams [480, 82].
Based on this review of ethics, an important conclusion can be drawn: the
exploration of human-factors within human-AI teaming should not be devoid of the
societal context in which humans view those factors. Thus, while this dissertation
explores social influence within individual teams, it also works to consider the social
influence AI has in society, the coming growth of AI in society, and the importance of
people’s lived experiences and differences, all of which will be drivers for the growth
and application of AI social influence. Without these considerations, the external
validity of the work produced by this dissertation would be highly lacking, and the
contributions would be greatly reduced.
2.1.2.3 Trust
One of the most critical and highly researched human-factors in human-AI
teaming is that of trust. Empirical research has not only shown but quantified
the effects on trust of incorporating AI teammates in place of human teammates
[284, 283, 112]. Even the simple belief that a person is working with an AI teammate
will cause their trust in that teammate to drop [298]. Thus, there exists a gap between
the trust that can be formed for human teammates and trust for AI teammates, and
it is the duty of human-AI teaming research to work to close this gap. Additionally,
a multitude of factors have been shown to impact the trust within a human-AI team, either
positively or negatively. For example, the existence of spatial information for human
teammates can improve the trust humans have in AIs [390]. Additionally, communica-
tion patterns can also be a large reason as to why human teammates would trust their
artificial teammates [236, 115, 53]. Specifically, AI-agents who explain the rationale
behind their decision and actions are also trusted more by their human teammates
than AI-agents who do not [303, 368]. Even the use of transparency where AI-agents
are honest about their confidence in a decision can be a deciding factor on whether a
human teammate trusts their AI teammate [82, 288, 183]. Importantly, trust within
the context of AI has begun to be linked with the concept of AI acceptance, making
its potential relevance to early human-AI teams all the more critical.
Even the quality and prioritization of other human factors can greatly impact
trust within human-AI teams. For instance, varying levels of team cognition and
its components, such as situational awareness or mental model similarity, can have
significant effects on trust in human-AI teams [390]. Additionally, the identifiable
ethics of an artificial teammate can impact various factors within teaming, including
trust [259]. Finally, in regard to social influence and this dissertation, modifying
the behavioral design of an AI teammate to override the contributions of a human
teammate (i.e., attempting to obtain all teaming influence) can negatively impact
trust, even between human teammates [140].
Based on this review of trust, an important conclusion can be made: human-
factors should not be viewed in a vacuum separate from other human-factors. Thus,
the exploration of social influence in human-AI teaming conducted by this dissertation
also considers the importance of various human-factors, such as trust, during each
study.
2.1.2.4 Social Influence, an Underexplored Factor
In addition to reviewing research on more populated topics, it is important
to discuss areas of research that have yet to receive heavy exploration or have been
explored from a mostly conceptual perspective despite being factors critical to team-
ing. Social influence serves as an underexplored concept in human-AI teaming despite
it being a critical factor in teaming [164], primarily because its exploration within
human-AI teaming has not yet received an empirical foundation and has been explored
mainly indirectly and theoretically through research involving concepts
like coordination [58, 439] or leadership [142]. Moreover, the research around the
above types of social influence is often concerned with how AI systems change ex-
isting leadership or coordination, but there is a large blind spot in how AI systems
can contribute to social influence explicitly. Thus, this dissertation focuses not
only on how AI can impact human social influence, but also on how humans can impact
AI social influence, which will in turn help these other underexplored research areas do
the same.
2.2 Human-Centered Artificial Intelligence and Designing for Artificial Intelligence Acceptance
While the review of Human-AI teaming serves as an environmental motivation
for this work, human-centered artificial intelligence serves as a motivational lens that
helps to better focus both the concept of social influence and the contributions of this
work. Despite this work heavily contributing to the present and future of human-
AI teaming, the tangible and applicable outcomes of this dissertation will take the
form of design recommendations for AI systems and agent teammates. Importantly,
these design recommendations will be derived from the three human-centered studies
in this dissertation, which is critical to ensure the design recommendations created
are human-centered. However, before those design recommendations can be made or
studies can be created, the current research domain that is human-centered artificial
intelligence must be reviewed. Specifically, two areas need to be reviewed: (1) the
concept of technology acceptance and its applicability to AI; and (2) the existing
design recommendations for human-centered AI that may relate to AI social influence.
Importantly, the design recommendations made by this dissertation will not only need
to consider but also build upon these two areas to ensure that recommendations are
applicable, iterative, and grounded in the community’s current work.
Before discussing the above mentioned components of human-centered AI, it is
important to provide a brief overview of what it is, what the goals of the research do-
main are, and why these goals are important. Broadly speaking, human-centered AI
looks to balance the technical advantages of artificial intelligence with unique human
factors advantages provided by humans [470]. This balance can include ensuring hu-
man acceptance (the focus of this dissertation), ensuring ethicality, ensuring usability,
and ensuring integration alongside other technologies and processes [403, 401, 361].
Achieving this balance often centers around human-centered design processes that
place these human factors at the center of research and examine how the design of
technology ultimately interacts with these factors [31]. Unfortunately, when this
balance is achieved, it often comes at the cost of AI performance, but recent work has
more heavily focused on preserving AI teammate performance while still encouraging
human interaction [402]. While the broader concepts of human-centered AI are im-
portant to this dissertation, the human factor of technology acceptance and the design
recommendations that mediate it are most relevant and will be discussed below.
2.2.1 AI Acceptance and Variances in It
Before discussing human-centered AI, this dissertation must first discuss what
it means for technology, in general, to be human-centered. Specifically, the first
thing to be discussed is the acceptance of technology because AI is going to enter
existing workforces and existing teams, which means that it must be accepted for this
entry to be embraced [305]. Historically, the introduction of new technologies has not
always been holistically accepted by humans [105]. However, over time, humans often
become more accepting and willing to use technology [189, 453]. For instance, the
initial integration of automation and technology into the manufacturing workforce was
not consistently accepted by workers, but over time the integration became more
accepted due to norm establishment and other factors [72, 234]. However,
the lack of initial acceptance of these technologies could ultimately prevent humans
from benefiting from them.
Ultimately, these historical factors along with the designs of new technologies
have been synthesized into the Technology Acceptance Model (TAM), which details
how the features of a system affect the ease of use and perceived utility of that system,
which in turn impacts the attitude towards using that system, which in turn impacts
the actual use of that system [105]. However, the initial introduction of this model was
incomplete and has been heavily revised over the years to include other factors, such as
cultural considerations and individual differences [7, 264, 187, 243, 281]. Importantly,
the most relevant iteration of the TAM to this dissertation is the TAM3, which more
heavily considers the mediating role of individual differences while also expanding on
the concept of ease-of-use to go beyond simple manual usage of a technology, which is
critical to ensuring relevance to AI technologies [451]. These refinements are critical
as the introduction of AI as a novel technology merits potentially novel considerations
of its acceptance, thus leading to the current linkage of AI systems to the TAM in
the present research domain [411, 222, 159].
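To make the chain described above concrete, it can be sketched as a simple path model. This sketch is illustrative shorthand only; the symbols and linear form are this summary's own notation rather than equations taken from the TAM literature cited above, and the coefficients are placeholders rather than estimated values:

\[
(\mathrm{PEOU},\ \mathrm{PU}) = f(\text{system features}), \qquad
\mathrm{ATT} = \beta_{1}\,\mathrm{PEOU} + \beta_{2}\,\mathrm{PU}, \qquad
\mathrm{Use} \propto \mathrm{ATT}
\]

where PEOU denotes perceived ease of use, PU denotes perceived utility, ATT denotes the attitude towards using the system, and Use denotes actual use of the system.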
Currently, the only instance of social influence considered within the TAM is
the external social influence people use to pressure others into using technology. For-
tunately, there is already a moderate degree of acceptance of AI within the world, as
the perceived utility of the technology is relatively high [167, 326, 108]; however, this
acceptance may come at a cost. For example, self-driving vehicles, a highly accepted
form of AI [177, 304], have caused human deaths due to faulty systems and program-
matic errors [254, 54]. However, the perceived utility of the technology, which is a
central component of the acceptance of technology, has allowed testing and research
to continue mostly unhindered. While the TAM does take into consideration the exis-
tence of societal influence, that concept can go both ways if social influences go against
the technology [451]. Specifically, individual differences may play an important role
in mediating external societal influence, and thus the acceptance of AI teammates,
as past AI systems have disproportionately targeted specific populations [474]. For
example, Microsoft’s Tay (an AI-powered Twitter account) quickly became a Neo-Nazi
that hated Jewish people [465, 306, 274], and even Facebook algorithms have been
shown to discriminate disproportionately against women [18, 287, 219]. Thus, while
the concept of ease-of-use requires critical iterations to provide AI teammate rele-
vancy, the TAM’s consideration of individual differences may also
need to be updated due to AI’s troubled history of discrimination. Below, we discuss
potential individual differences, such as age, race, gender, and computer anxiety, that
may yield strong connections to AI teammate acceptance due to negative experiences
associated with AI.
2.2.1.1 Age
As a first example of individual differences, older individuals are often less
accepting of new technologies and often underutilize new technology [186, 88, 310,
83, 26, 180]. This is unfortunate as AI technologies, such as self-driving vehicles, have
the potential to disproportionately empower older individuals, who are more often unable to
drive [255, 370, 371], and this disparity in acceptance can extend to other
domains, such as e-learning [278]. Furthermore, older generations were the most
affected by the I3.0 movement and have had difficulty adapting to the integration of
technologies into the workforce [119, 33, 91, 245]. Therefore, despite being poised to
benefit greatly from AI technology, older people may be uniquely harmed by previous
integration of automation and autonomy, leaving them with a roadblock that prevents
AI acceptance. Moving forward, these cultural factors, along with the general distrust
older adults have toward new technology, will become a greater consideration in ensuring
older adults accept the social influence of AI in their personal and work lives. Thus, if AI is
going to be accepted by, and in turn benefit, older individuals, then research into the
technology needs to be considerate of the impacts this individual difference can have.
2.2.1.2 Race
Unfortunately, incorrect uses of AI could also make race an important indi-
vidual difference, as past AI systems have harmed non-white communities.
For example, the COMPAS system contained large degrees of AI bias that
ultimately harmed the lives of multiple black individuals by disproportionately
deeming incarcerated black individuals not suitable for release
[251, 356, 192, 63, 132]. Another example would involve the existence of racist image
classification algorithms or skin detection software that negatively target non-white
individuals [64, 9, 418, 163]. To combat these issues, recent research has worked to
remove the racist biases from AI systems and empower them to make more fair and
unbiased decisions [248, 145, 48]. Moreover, these efforts have been conducted both
from the AI development and the human side of research, as these biases are not
unique to AI, but rather reflections of evil within society [404, 479]. Specifically, crit-
ical work has identified the importance of context in data, which is often removed for
AI training [398, 235, 473]. Unfortunately, these past transgressions might ultimately
create negative perceptions in human teammates that prevent them from accepting
and working with future AI teammates. Thus, future AI research should not only
be aware of the potential impacts that race could have as an individual difference
but also incorporate the societal context into research during design, analysis, and
interpretation.
2.2.1.3 Gender
Similar to race, sexism in AI has been a repeated concern within society and
the research community [485, 185]. Past AI work, in addition to being biased in favor of
white individuals, has also been known to exhibit gender bias in a variety of ways. For example,
a lack of diversity in AI research and development has implicitly created harmful
biases within AI systems [442, 146, 128]. In addition, companies such as Apple and
IBM have created AI systems and technology that have ignored critical factors in
women’s health, such as menstrual cycles and gender differences in treatment safety
[421]. Natural language AI has also been shown to bias men over women for certain
professions or discussions, leading to the perpetuation of harmful stereotypes by AI sys-
tems [153]. Moreover, AI policy work often lacks explicit considerations for gender
or racial fairness, which means future policy work and research should target these
individual differences [56]. And, while racial and gender biases are both negative,
research has even shown that gender biases in AI are often less questioned than
biases regarding other individual differences [336, 166], which means individual differences
must be viewed individually and explicitly, as their relationship to AI and society is not
always equivalent. Similar to race, while gender is not explicitly linked with the
acceptance of AI teammates, past biases that have been gender motivated may ultimately
create negative perceptions against AI teammates. Thus, moving forward, we can see
that the consideration of gender, and indeed all individual differences, cannot happen just at
the design stage, but it must happen throughout conceptualization, design, analysis,
and publication. Without this consideration in AI research, damage will continue to be
done by sexist, racist, ableist, or ageist AI systems.
2.2.1.4 Other Individual Differences
While the three differences discussed above are the most prevalent in AI re-
search, they are not an exhaustive list. Other potential individual differences that
have been linked to the technology acceptance model and could impact AI acceptance
include: technology expertise [475], job seniority [66], media exposure [241], personal
innovativeness [363], and cultural differences [416]. Research has shown that these
individual differences play a more mediating role in technology acceptance compared
to ease of use and perceived utility [67]; however, the importance of individual dif-
ferences in teaming [469, 395, 275, 247] means that their impact may be greater in a
human-AI team and is worth further exploration.
2.2.1.5 Technophobia, Computer Aversion, and Algorithm Aversion
While the above individual differences can be viewed as lower level differences
that are often captured by demographic questionnaires, more complex individual dif-
ferences can play a role in technology acceptance. For instance, the TAM3 explicitly
provides consideration for how human anxiety toward computers can impact their ac-
ceptance [451]. However, this type of anxiety could be viewed from various
angles in regard to its applicability to AI acceptance. As an example, technopho-
bia and algorithm aversion could also be viewed as forms of computer anxiety that
might prevent humans from interacting with AI systems [118, 117]. Technophobia
refers to a general fear of new technology [61], while algorithm aversion
takes this one step further by demonstrating how humans can be averse
to AI algorithms, which may be caused by a combination of factors including techno-
phobia and computer anxiety [318]. Regardless of their differences, these factors can
have similar impacts, with users not wanting technologies to help despite their potential
utility [217, 197, 367]. These factors are different from the individual differences
found commonly in humans, as they are not demographic characteristics, but rather a
perception that users carry with them when entering the human-AI interaction [366].
Importantly, these factors can also be influenced by other individual differences, mak-
ing them more complex and dynamic perceptions that may not be initially apparent
[155, 313, 425]. Thus, it’s important to note that the consideration of individual dif-
ferences shouldn’t just stop at general demographic information but must actually dig
deeper into the preconceived notions that humans may have before interacting with
AI systems, which may in turn impact their acceptance and interaction with those
systems. Although this is done within some components of the TAM, the potential
consideration of teaming means that the broadness of relevant individual differences
may increase.
The above sections demonstrate how individual differences, a fundamental
principle in human factors research, have unknowingly become one of the most im-
portant factors in ensuring the acceptance of technology. Thus, if AI is going to be
human-centered and, in turn, accepted, explicit inclusion and consideration of these
individual differences is necessary in research, design, and implementation. Un-
fortunately, past inconsideration of these individual differences has already caused
damage that is already preventing the acceptance of AI technology. Even AI domains
fortunate enough to gain some public acceptance can still lack consideration of these
differences and harm humans. Thus, designing AI systems to be human centered
includes designing AI systems to be liked and accepted by humans, and gaining this
acceptance may be easier to gain from some people than others. Moreover, the importance
of this acceptance becomes even greater when discussing teamwork and social influence, as the impact of
an influential teammate is not only significant but also more pervasive. Therefore,
this dissertation will also work to provide a better linkage between existing technology
acceptance models, human-AI teaming, and individual differences.
2.2.2 Current Design Recommendations and Their Relationship to Teaming and Social Influence
While individual differences play a key role in human-centered AI and technol-
ogy acceptance and will similarly play a role in the acceptance of human-AI teaming,
a large portion of modern human-centered AI is focused on the first two components of
technology acceptance: ease of use and perceived utility. Specifically, human-centered
AI research has worked to create design recommendations that allow AI designers and
developers to improve these two factors, and the importance of these recommenda-
tions has led researchers to synthesize and categorize them [22]. Given the importance
of technology acceptance and the TAM as identified above, this dissertation will dis-
cuss these recommendations in regard to their relevancy to ease-of-use and perceived
utility. Additionally, this work will discuss recommendations related to algorithm
aversion and technophobia as they have gained recent traction as factors influencing
AI acceptance due to their relevancy to computer anxiety. As an aside, it would be
impossible for this dissertation to discuss every design recommendation proposed by
previous research, so the following discussion is not intended to be exhaustive but
rather representative.
2.2.2.1 Design Recommendations for Ease-of-Use
At a basic level, a large number of recommendations created for the ease-of-use
of past technology interfaces, such as Nielsen's usability principles [311], are relevant
for interfaces used to interact with AI systems. More specifically, research
in AI-powered recommender system UI recommends that interfaces use more visual
components [223], provide the ability to interact with AI-provided content [292], and limit
the amount of content the system provides to the user [328]. Additionally, research in
AI-driven personal assistant interfaces recommends that UIs utilize voice and audio
interaction [449, 427], utilize more conversational speech [104], and vary aesthetics to
allow user customization [472]. Importantly, the design of visual aids can
also provide significant improvements to AI teammate perception and performance
by increasing awareness, meaning these elements of UI design still hold relevance.
However, ease-of-use in AI systems is not limited to simple UI design, as AI possesses
a degree of autonomy that allows it to operate independently of humans, meaning its
“use” would not be similar to normal technological tools [124]. Moreover, the level of
autonomy AI is capable of achieving grows every day, lowering the need for human intervention
and thus potentially increasing the “ease-of-use” of AI [135]. Research has empirically
linked the level of autonomy as a key factor in ease of use and thus user acceptance,
with high degrees of autonomy leading to high degrees of acceptance, resulting in
researchers often recommending high degrees of autonomy in specific contexts [374].
Therefore, rather than viewing AI from a traditional tooling ease-of-use perspective,
it may be more productive to extend the concept of ease-of-use in AI to consider
disruption to existing workflows as a result of autonomy, as past research would
recommend minimizing this disruption and ensuring AI’s integration is frictionless
[125, 410, 224, 344]. Included in this, research also recommends updating AI systems
both iteratively and cautiously so as not to create large disruptions [22]. Additionally,
research recommends designing AI to adopt or consider social norms as they are
integral to effective teaming [22, 381, 339]. Thus, the above recommendations can
be synthesized into the recommendation: The interactive modalities for human-AI
interaction should prioritize existing ease-of-use recommendations, but the design of
autonomy should look to minimize the friction it introduces in existing work settings.
In regard to teaming influence, ease of use would have heavy implications.
Potential increases in teaming influence would carry with them an increasing potential
to disrupt existing work processes. Thus, if AI teammates are designed to be highly
influential, then they must be designed to be easy to use or humans will not be able
to adapt to a high degree of social influence. That does not, however, mean that AI
teammates should simply be designed to not have teaming influence and impact on
a team; rather, it may be more helpful to prepare teams for that potential teaming
influence and encourage them to accept it, in turn reducing potential friction.
2.2.2.2 Design Recommendations for Perceived Utility
Efforts to increase the perceived utility of AI systems have come from both a computa-
tional and a human perspective. For instance, major strides have been made in the
field of AI to increase the practical, performative abilities of AI systems, which is im-
portant for its perceived utility [161], including: promising results regarding vaccine
and drug discovery [372, 216, 27, 436], increases in cancer detection and classifica-
tion [229, 352, 270], and even the ability to generate new and unique music and art
[268, 127, 263, 434]. However, as discussed in more detail earlier, those simple in-
creases in performance do not make a system human-centered, and work has recently
shifted toward recommending the assurance of human compatibility [40, 42]. For ex-
ample, at the computational level, AI explainability is becoming one of, if not the, most
important recommendations, where the community has noted that the black-box na-
ture of AI must be avoided [29, 28]. Enabling this explainability not only allows a
greater understanding of an AI’s perceived utility, but also allows oversight in a way
that is currently lacking while still being important to humans [350, 5]. The recom-
mendations of explainable AI from a computational perspective have been researched
in human-AI teaming with highly promising results. From a teaming perspective,
these recommendations may similarly benefit ease-of-use as they can make it easier
for a human to understand an autonomous system [330]. Thus, work is shifting the
perspective away from AI performance and more towards AI utility, which is a more
complex concept than simple performance.
Additionally, from the more human side of research, significant research efforts
have been made to show users how capable an AI system is both at the computational
and social levels. For example, past efforts have recommended communicating the
computational process of an AI system through visual representations [419, 420].
Furthermore, these visualizations, despite being computational in nature, are often
better received when designed to be abstract and simplistic, meaning that humans
do not want the exact details of a system but do want to see how the sausage is
made, so to speak [190]. For example, recent research highlighting the effectiveness
of these visualizations with younger audiences [359] has shown promise in this area.
Research has also recommended explicitly providing users with the potential benefits
that an AI system will provide them [435]. Specifically, explanations of these potential
benefits need to also target the motivations of specific users to ensure perceived utility,
which means individual differences will be especially important in AI explanation
[301]. While this is not an exhaustive list of recommendations that can improve AI’s
perceived utility, two things are clear: (1) perceived utility is both a computational
challenge and a social challenge; and (2) perceived utility is not a universal metric as
each individual user has different motivations and differences.
In regard to social influence and AI teammates, the above design recommen-
dations could play a pivotal role in the acceptance of AI social influence. For instance,
while an AI system may have the potential to be highly influential and complete a
task on its own, human compatibility may dictate that AI teammates need to allow
humans a higher degree of teaming influence than computationally optimal. Sim-
ilarly, if an AI teammate were programmed to have an excessively high degree of
teaming influence, accompanying that teaming influence with an explanation of how
and why it is necessary could help users better understand and accept the benefits of
the ensuing social influence. Moreover, the explanation provided to the user should
not center around how the AI’s social influence will benefit the task, but rather how
the AI’s social influence will benefit the user’s motivations, such as reducing their
workload or allowing them to focus on big-picture work. Unfortunately, the benefits
of implementing those recommendations may not be perceived if humans are outright
averse to AI teammates from the start.
2.2.2.3 Design Recommendations for Technophobia, Computer Aversion,
and Algorithm Aversion
While technophobia, computer anxiety, and algorithm aversion are uniquely
different, design recommendations for them can be discussed simultaneously due to
their similar impacts in human-AI interaction as one may feed into another. For
instance, recent research has recommended that promoting the growth of human-AI
relationships would be a beneficial choice in helping reduce the impacts of techno-
phobia [323]; similarly, repeated experience and relationship growth with AI systems
has been suggested as a means of reducing algorithm aversion [138], which is already
another important component of the TAM3 [451]. Past work has also shown and
recommended technology education as a means of reducing technophobia [366, 323],
which may relate to educating users about the perceived utility of AI systems. This
last recommendation additionally becomes increasingly important when interacting
with older adults [308, 93]. Moreover, research has also shown that this education
should go beyond the simple capabilities of the system but also discuss how the system
will impact the lives of the user [396]. Research has additionally suggested allowing
users a “trial period” where they are allowed to try out the technology but do not
have to commit to using it [334]. Furthermore, work has shown that focusing on user
enjoyment and fun can actually allow users to replace fear with more positive emo-
tions [295]. Finally, from a work and organizational perspective, recent research has
recommended having an organizational culture that is more positive towards algo-
rithms to create a buffer from technophobia [218], which could be viewed as a version
of social influence that is more tied to team norms. Importantly, these design recom-
mendations should not be ignored, as technophobic or algorithm-averse users
can be common and should thus be designed for from the start [207].
Based on the design recommendations provided above, it is clear that algo-
rithm aversion, computer anxiety, and technophobia need to be considerations both
before and during the introduction of a new technology. Thus, to mitigate the poten-
tial resistance users would have towards AI teammate social influence, researchers and
practitioners should: (1) educate users on how an AI teammate will influence them
and how that influence will change their workflow; (2) provide humans with a long-
term trial period where they can become comfortable with the AI teammate’s social
influence; and (3) maintain a positive attitude towards the benefits of the AI team-
mate's integration. Unfortunately, given the short-term nature of this dissertation's
experiments, (2) may not be fully realized; however, these design recommendations
will be used during experimentation alongside the manipulations of each experiment
to mitigate resistance towards AI social influence.
2.3 Social Influence in Teamwork
At the core of this dissertation is the concept of social influence; however,
social influence is often a broad term that can change depending on the situation.
While the definition of social influence is provided earlier in this dissertation, it is
worth reiterating now in light of this background section: “change in an individual’s
thoughts, feelings, attitudes, or behaviors that results from interaction with another
individual or a group” [355]. Furthermore, when it comes to teaming, this disser-
tation takes the view that the effects of this social influence are often concentrated on the
perceptions, actions, and effectiveness of teammates that occur during standard team
interactions [164]. However, given the broad nature of social influence, this review is
going to focus more on practical implementations of social influence in society and
how those implementations are pertinent to teaming and teamwork. Thus, this re-
view is structured as follows: (1) an initial review of social influence theory and its
applications; and (2) an overview of how technology mediates social influence. After
reviewing these two concepts, a solid understanding of what social influence is, how
it can benefit or harm a team, and how its effects can be mediated can be achieved.
2.3.1 Social Influence Theory in Teaming
At its core, social influence theory is concerned with how humans are able to
use the influence within their social networks to create change, where this change
often contributes to the motive of the person exerting said social influence [441,
147]. There exist various types of social influence that can be
present in a team. For example, social influence can be direct or indirect [164, 144].
Direct social influence would involve a person directly influencing or manipulating an
individual they would like to see change [148, 120]; however, indirect social influence
would involve manipulating an object or person through a different instance of social
influence [463]. For example, if a person or AI teammate were to deliver a report early
to their manager, they might directly influence their manager’s perception of their
work ethic, but they also might indirectly influence other workers into also turning in
reports earlier. Moreover, research has viewed social influence from both a normative
perspective and an informational perspective. Whereas informational social influence refers
to the impact new evidence could have on people [211], normative social influence
actually refers to the interpersonal social influence exerted by individuals [68, 315,
225]. For instance, reading a coworker's resume and finding out they are a highly qualified
programmer, which may change your perception of their performance, would be considered
informational social influence, while normative social influence would involve a
coworker demonstrating their programming skills while you both work on a shared
code base [239]. Importantly, receptivity to social influence can actually vary based
on the type of social influence that is exerted, making the differentiation between
these types important [260, 47]. Moreover, individual differences, which are reviewed
in depth above, also play an important role in determining the effectiveness of both
applying and reacting to social influence [462]. The above demonstrates that while
social influence is a tangible concept, it can vary from context to context and person to
person as the motives and methods associated with social influence can vary, meaning
the concept can look very different in a teaming setting.
Within teaming, social influence theory has seen a large focus, as its effective
use may be important for effective teaming for a variety of tasks [240, 70]. However,
even within teaming, social influence can take different forms. For example, interper-
sonal social influence can be seen where teammates will try to get other teammates
to help them or complete a task that the influencer needs to complete [202, 477].
Importantly, social influence in teaming is not always smooth sailing, as
conflict often arises in teams, often due to individuals’ social influence challenging
each other [3, 204, 157]. On the other hand, leadership, an important component to
teaming [409, 191, 114], can be viewed as the explicit organization of social influence
often through hierarchy and delegation [76, 182]. The benefit to this explicit orga-
nization can be seen in the handling of influence-based conflicts, as one of the core
responsibilities of leaders is conflict management [376, 215].
Despite the importance of leadership to teaming, some teams elect to
self-organize their social and teaming influence and do not utilize explicit leadership,
which allows social influence to be more fluid and dynamic due to a lack of hierar-
chy [294, 181]. However, even teams that do not explicitly use hierarchy for social
influence organization still see those behaviors within individuals as they often occur
naturally in teams [114, 230, 417]. This example demonstrates how social influence
and even influential leadership do not have to be explicitly tied to hierarchical lead-
ership to exist effectively, which is good news for human-AI teaming as leadership is
also underexplored in that domain as well [143].
Thus, as human-AI teaming moves closer to being a reality, the need for explicit consideration of social influence in AI teammates grows, as they will not only need to be influenced but will also need to influence their teammates and even exert some leadership behaviors [407]. Importantly, social influence, even in AI and robotic systems, does not have to be expressed verbally; verbal communication is not the focus of this dissertation, as social influence can also be leveraged through non-verbal actions [206]. While the review above demonstrates the importance of social influence and its relevance to teaming, practical examples help demonstrate the power of the concept when put into practice. Thus, the following sections, while not explicitly tied to teaming, further explore social influence from the perspective of those being influenced and those doing the influencing.
2.3.1.1 Examples in Imposing Social Influence: Persuasion
While the concept of social influence has seen broad application, one of the most applied concepts within the theory is persuasion, which refers to the ability to spread and implant one's social influence across groups of people [133, 466]. There are a plethora of practical examples of persuasion being used in the real world, and they illustrate how the social influence held by a person can be synthesized into different modalities. For example, propaganda created by governments or organizations serves as a very real and impactful example of turning social influence into persuasive material [205, 373]. For instance, governments and extremist groups have been able to heavily sway public opinion through the spread of propaganda, which provides an often malicious method for leveraging social influence [231, 136, 271]. A more light-hearted form of propaganda, organizational advertising, is another practical example of persuasion, where companies whose sole purpose is to sell something create materials to influence and persuade consumers into spending money [325, 362, 343]. Furthermore, the visual design and presentation of this advertising has significant implications for its persuasive power [290, 62, 103], which in turn matters for the design and presentation of AI systems.
In regard to AI systems, persuasion can be a critical component of leveraging
social influence, and has seen minor exploration outside of teaming [137]. For example,
the concept of persuasion in recommender systems has been studied, as the goal of
such systems is often to encourage users to adopt a prescribed recommendation [11].
However, designing AI systems to be persuasive teammates is still underexplored
as human-AI teaming research is only beginning to take shape [329]. Fortunately,
technology acceptance does provide an important proxy for perceived utility, where utility can be viewed as the enticement for a person to adopt a technology. Although research has made AI systems more attractive over the years, more work is needed to ensure that AI teammates can be attractive and persuasive to their human teammates, especially given the generally low perceptions of AI teammates compared to human teammates [286, 298]. Importantly, similar to general social influence, persuasion, even in virtual agents, can be manipulated through the design of non-verbal cues and is not reliant on dialogue and verbal communication, which is not the focus of this dissertation [24]. Thus, closing this perception gap, which will affect social influence, should be contingent on increasing persuasion; however, closing that gap should come not only from the AI design side but also from the human side.
2.3.1.2 Hypnotism: I Promise it’s Relevant
While the above subsection explains how social influence can be leveraged by
a user, there is still an important perspective yet to be reviewed, and that is of the
person being influenced. The general term used to determine if someone is going to be
receptive to the social influence leveraged by another person or agent is susceptibil-
ity [314, 49]. Although susceptibility has been studied from a user perspective, such
as in marketing and advertising research, these studies may lack the interpersonal
component that exists within teaming [462]. One of the most interesting examples of
susceptibility that relies on interpersonal connection and could be heavily relevant to
teaming is hypnotism, which is a highly applied form of social influence theory that
balances the persuasion of a hypnotist with the susceptibility of the person being
hypnotized [296, 221, 178]. Moreover, researchers have synthesized this susceptibility concept into empirical measurements, creating specific scales that identify the susceptibility an individual would have to hypnotism [461, 179]. Additionally, this susceptibility, although strongest in in-person scenarios, can even transcend technological barriers [338, 337], making the concept extremely pertinent to AI social influence.
Thus, research into the creation of persuasive and influential AI teammates has
to be balanced with research into susceptibility of human teammates, especially given
the almost natural and necessary existence of social influence in teaming settings.
Currently, the concept of susceptibility has not been explored in human-AI teaming research; the closest analog is the Technology Acceptance Model (TAM), which has not yet been related to social influence in AI teammates. Thus, since previous research
has empirically synthesized susceptibility to hypnotism, there is also a gap where
research could identify qualities that make humans susceptible to AI social influence.
Moreover, various human factors in teaming may become relevant to this concept,
such as trust, ethics, and team cognition. For instance, a high degree of shared trust in
a team may allow an individual’s technology acceptance to be shared, or high degrees
of team cognition may implicitly share that acceptance across teammates. Thus,
while the concepts of susceptibility have been studied, their application to teaming,
and especially human-AI teaming given the complexity of technology acceptance,
should be explored.
2.3.2 Technology’s Mediation of Social Influence
While the discussion of social influence, its relevance to teaming, its applica-
tion, and its reception are critical to this dissertation, this background review will
conclude with one of the most important social influence factors related to this dissertation: the mediation of social influence through technological platforms. Given that we live in an age of information heavily facilitated by online and digital interaction, a large portion of modern social influence work has explored how social influence is not limited to face-to-face interaction but can be leveraged and received through digital platforms [482, 324]. Although technology and digital communication have created platforms that enable worldwide collaboration, this enablement has also allowed people to spread social influence to wider audiences through channels such as social networks [130]. However, this online social influence may
not be as powerful as in-person social influence; an important example of this in
the literature is media richness theory, which dictates how the type of media used
determines its level of social influence [424]. For instance, social influence spread
via face-to-face interaction is often more effective than social influence spread via
email; however, email would be able to reach a larger group of people faster than
face-to-face interaction, thus increasing the potential total social influence summed across
multiple people [426, 113]. Thus, the selection of media platforms and the design of
communication on them does not have a universal optimum but is rather a choice
that needs to be made by those seeking to influence, such as governments, advertisers,
or AI teammates [440, 15]. While the theory of technology-mediated social influence
is important, practical examples of its application are critical to this dissertation's background, and two such examples are discussed below.
Within teaming, technology-mediated social influence has become highly preva-
lent as technology-mediated teaming is becoming heavily normalized within society, especially given the COVID-19 pandemic [149, 468, 457]. For instance, the use of digital communication platforms has become a heavy component of modern teaming, meaning the strength of the social influence directed at any single individual may have dropped.
Managers can now, however, manage larger teams that are not physically collocated,
in turn spreading this weaker form of social influence to a larger team [256]. How-
ever, teams come in all different shapes, sizes, roles, and locations, meaning that
the optimal digital platform or even the use of that digital platform at all is not
always universal or constant [405, 25]. Importantly, the use of these digital plat-
forms has also greatly benefited worker social influence and has made it easier for
low-level workers to impose upward social influence directed at management [422].
Thus, technology-mediated social influence and teaming is not a simplistic tool used
by managers to impose social influence, but rather an evolutionary technology that
has allowed teaming and the social influence within it to evolve past hierarchical
and geographical roadblocks. Furthermore, the use of technology to mediate teaming and social influence has an important implication for human-AI teaming: the current workforce is not only aware of but also accustomed to leveraging and receiving social influence through digital platforms, which may make them more susceptible to AI teammate
social influence. Additionally, the integration of these agents within existing digital
platforms could, in turn, improve their ease of use and increase their acceptance [141],
and these virtual teammates may be able to extend the social influence already being
leveraged in these digital platforms.
Another example of technology-mediated social influence that is highly rele-
vant to this dissertation is the creation and application of virtual agents as influencers
of social media [86]. Despite social influence in human-AI teaming not being heavily
studied yet, the application of artificial digital influencers can provide an interesting
example for how social influence from virtual teammates may be received. As of writ-
ing this dissertation, the use of social media influencers as marketing tools is not a new
concept, with influencers often creating highly fabricated lifestyles to garner larger
sponsorships and to sell more products [158, 201, 20]. Interestingly, these influencers do not always present their real, organic selves, as many have begun creating virtual personas that they present to their followers [257, 265]. However,
recent years have seen companies forgo sponsorship of these “real” influencers and
opt to create purely virtual and fictional digital influencers with the goal of creating
more organic and personable advertising [121, 242]. Essentially, this strategy allows
companies to leverage the greater social influence of personable advertising instead
of either paying existing influencers or using non-personal advertising (which would
be less influential based on media richness theory) [291]. Thus, virtual agents are not
only mediating tools of social influence, but are also actors within the advertising system that possess a degree of social influence. In terms of human-AI teaming, this
example has an important takeaway that this review should close on: virtual team-
mates are not just mediating tools of human social influence but are actual entities
that possess and leverage social influence for a desired purpose. For social media in-
fluencers, that social influence can be used to sell more products, but for human-AI
teaming, that social influence can be used to complete tasks efficiently.
2.4 Conclusion
The above review of social influence theory and its practical applications
presents interesting research opportunities for human-AI teaming. Although tech-
nology’s mediation of social influence is highly documented, AI represents a shift
in how we interact with technology. Rather than being a simple tool that we use
to leverage social influence, AI, and especially AI teammates, will possess its own
level of social influence that it will need to leverage to accomplish its assigned task.
Moreover, the reception of this social influence may not always go smoothly, as humans will possess varying levels of susceptibility and AI systems will be designed differently, meaning their potential persuasiveness will vary. Furthermore,
the rapid growth of AI systems in society means that AI, even without design changes,
will be garnering more social influence every day, which may instill hesitancy or even
fear towards the technology. Thus, if AI teammates are going to have any chance at
being influential in both teams and society, then the following goals, which are being
adopted by this dissertation, need to be ensured:
Goal 1: The contributions of this dissertation to AI teammates should consider how
they impact the persuasiveness and perceived utility of the technology.
Goal 2: The concept of social influence should not only be viewed from an AI design
perspective, but also from humans’ preexisting and general susceptibility to it.
Goal 3: Existing research on social influence theory and technology’s mediation of social
influence should be considered but not strictly held to, as AI teammates will
not exist as tools but rather autonomous entities empowered by technology.
Additionally, the above reviews on human-centered AI and human-AI teaming clearly demonstrate that human factors are not only important to general human-AI research but that the specific domain of human-AI teaming research has realized this importance and heavily shifted toward the empirical exploration of these human factors. Unfortunately, as the previous review concludes, these factors cannot be viewed in isolation from one another; thus, as research on human-AI teaming grows, it becomes more difficult to holistically study human-AI teams. Therefore, while this dissertation is considerate of various human factors, it must scope its focus around social influence and provide a foundational view of how social influence is
important to human-AI teaming. Thus, based on the conclusions made above, this
dissertation must have goals persistent throughout research that ensure its relevance
to both modern human-AI teaming research and real-world AI systems:
Goal 4: The contributions of this dissertation must be synthesized into design recom-
mendations that are understandable and actionable for AI designers and devel-
opers, as well as AI researchers.
Goal 5: The contributions of this research should help further align the concept of ease-
of-use to fit human-AI teaming.
Goal 6: This dissertation must be considerate of how society and an individual’s lived
experiences impact their view, application, and acceptance of AI social influ-
ence.
Adopting the above six goals not only ensures relevancy to each individual field,
but also ensures the merger of these fields within this dissertation. Thus, each study
within this dissertation iteratively works towards these goals to provide a foundational
understanding of social influence in human-AI teaming that can continue to grow and
flourish alongside the growth of the domain itself.
Chapter 3
Platform Selection
Studies 1 and 3 of this dissertation are designed to be in-person experiments
that examine the role of social influence in human-AI teaming. To encourage the
synthesis of the results of these two studies, both studies will utilize the same exper-
imental platform and make modifications based on the experimental manipulation.
Additionally, given the growing nature of human-AI teaming, platform selection was not entirely straightforward, as various teaming environments have been used. The following details the lengthy process of platform selection and hopes to
enable future research to better identify ideal platforms for studying human-AI team-
work.
As a starting point for the platform search, this work began by exploring gaming domains, as they are an area of interest in the human-AI teaming domain [480], do not require substantial development efforts, and commonly implement teaming tasks. Specifically, two types of games were considered: cooperative games, where humans work together, and esports titles, where teams work together in com-
petition. Additionally, platforms that have been associated with teamwork research
were also reviewed to see if their relevance can transition to this work. A comprehen-
sive list of platforms considered along with their pros and cons can be found in Table
3.1.
Cooperative Games
  Stardew Valley
    Pros: custom task; slow paced; up to 4 players; can grow potatoes
    Cons: difficult to operationalize influence; no AI; not team oriented
  Artemis
    Pros: requires teamwork; real-world application
    Cons: no AI; complex tasks; relies on communication
  Minecraft
    Pros: custom task; slow paced; variable team size; popular
    Cons: not team oriented; no AI; difficult to operationalize influence

Fighting Games
  Super Smash Brothers
    Pros: teams can range from 1v1 to 4v4; variable AI skill level
    Cons: high learning curve; no structured tutorial; fast paced; high character variability

Team Tournament Shooters
  Overwatch
    Pros: large competitive scene; team oriented; simple base mechanics
    Cons: fast paced; high character variability; limited AI modification; difficult to manipulate influence
  Team Fortress 2
    Pros: team oriented; customizable AI; simple base mechanics; AI influence can be manipulated by skill level
    Cons: fast paced; high character variability; older and less competitive
  Quake
    Pros: common AI testbed; variable team size
    Cons: fast paced; violent; doesn't require teamwork; difficult to operationalize influence

Multiplayer Online Battle Arena Games (MOBAs)
  Dota 2
    Pros: team oriented; custom team size; variable AI skill level; popular
    Cons: highly complex; high character variability
  League of Legends
    Pros: team oriented; custom team size; variable AI skill level; popular
    Cons: highly complex; difficult to manipulate influence
  Heroes of the Storm
    Pros: team oriented; simple base mechanics; competitive
    Cons: prefers constant team size; limited AI; high character variability; high influence variability

Strategy Games
  Starcraft 2
    Pros: popular AI testbed; popular
    Cons: difficult to operationalize influence; highly complex; not team oriented

Sports Games
  Rocket League
    Pros: team oriented; custom task; popular; large AI repository; highly custom AI; real-world application; highly variable team size; tutorial
    Cons: fast paced

Past Research Platforms
  Arma 3
    Pros: custom task; real-world application; highly custom AI; used for human-AI teaming research [259]
    Cons: complex; potentially violent; AI design limitations
  NeoCities
    Pros: real-world application; used for human-AI teaming research [282, 286, 388, 176]
    Cons: difficult to operationalize influence; AI design limitations; undergoing redesign
  Blocks World for Teams
    Pros: used for human-agent teaming research [203, 198, 446]
    Cons: difficult to operationalize influence; AI design limitations

Table 3.1: Potential Platforms
Based on the above list of games along with the pros and cons (Table 3.1), a
selection process began. While all the platforms shown in Table 3.1 would be suitable
for general human-AI teaming research, the specific goals of this dissertation mean that one platform is best suited. Specifically, there are certain pros that
are must-haves and certain cons that are deal-breakers for this dissertation. There
are two pros that are must-haves: (1) the platform must allow for the integration of AI systems, as this work seeks to use real, modern AI techniques rather than a Wizard of Oz approach; and (2) the task within the platform needs to be innately team-oriented to ensure its validity
around social influence in teaming. Additionally, there are specific cons that prevent
a platform from being appropriate: (1) Difficulty in operationalizing or manipulating
(teaming) influence, as that would make the applicability of the platform to the goals
of this dissertation much more difficult to justify; and (2) High character variability,
as it could heavily impact the experiences each participant has. Thus, based on those initial requirements, two platforms were selected for further review: Rocket League and Arma 3. Rocket League consists of teams playing a small-scale soccer game in cars,
while Arma 3 is a military simulation game with tasks revolving around military
scenarios.
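As a small illustration of this screening step, the sketch below encodes a few abbreviated entries from Table 3.1 as booleans (a simplification made here purely for illustration, not part of the original selection process) and applies the two must-haves and two deal-breakers mechanically:

```python
# Abbreviated, illustrative encoding of Table 3.1: "ai" and "team" capture the two
# must-haves (AI integration, team-oriented task); the other two flags capture the
# deal-breakers (hard-to-operationalize influence, high character variability).
PLATFORMS = {
    "Minecraft":     {"ai": False, "team": False, "hard_influence": True,  "char_variability": False},
    "Overwatch":     {"ai": True,  "team": True,  "hard_influence": True,  "char_variability": True},
    "Dota 2":        {"ai": True,  "team": True,  "hard_influence": False, "char_variability": True},
    "Rocket League": {"ai": True,  "team": True,  "hard_influence": False, "char_variability": False},
    "Arma 3":        {"ai": True,  "team": True,  "hard_influence": False, "char_variability": False},
}

# Keep only platforms that satisfy both must-haves and avoid both deal-breakers.
shortlist = [
    name for name, p in PLATFORMS.items()
    if p["ai"] and p["team"] and not p["hard_influence"] and not p["char_variability"]
]
print(shortlist)  # ['Rocket League', 'Arma 3']
```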
Of these two platforms, the additional pros and cons of each were examined
along with their suitability for the research goals of this dissertation. Although Arma 3 has a large amount of customizability and real-world relevance, ultimately, the large number of existing and verified AI teammates, the intrinsically team-oriented task, and the ability to rapidly change team size made Rocket League the ideal platform. It was determined that Rocket League provided the most appropriate test bed for understanding humans' acceptance and perceptions of their AI teammates' social influence and how that influence is distributed within their teams. The following describes how Rocket League meets the above requirements.
3.1 Rocket League
3.1.1 Team Tasks
One of the most important aspects in an experimental platform that is used
for teamwork studies is that the task being performed is valid within real-world, ecological teamwork. Using a traditionally individual task would result in trivialized experiments with obvious answers that potentially lack external validity. This is not a problem for Rocket League, as it is derived from the real-world sport of soccer, which is traditionally associated with teamwork. Soccer is not only associated with human teamwork, but the sport has also often been seen as a premier destination for robotics via the RoboCup [226, 227, 228, 30]. It therefore makes sense to merge these two worlds and understand human-AI teamwork within the sport of soccer. Rocket League offers a stable, tested, and portable platform capable of simulating a popular sport while also providing a more gamified environment, ideal for short-term human-AI interactions. Moreover, as a team task that centers around the manipulation of shared resources, soccer, and thus Rocket League, provides an excellent operationalization of teaming influence. Thus, the experiments created using
the Rocket League platform are not only centered around a team task but also a
task where both teaming and social influence can be visually represented through the
movement of a shared resource (i.e., the ball). This design will enable participants
to more clearly picture the impacts AI social influence can have on them, and it will
provide a highly visual example of shared teaming influence that can be discussed
during interviews.
Figure 3.1: Rocket League Screen Shot
3.1.2 Human-AI Teams
When evaluating and observing human-AI teams, one can either work to build
their own agents or search for capable substitutes. Traditionally, Rocket League
provided bot teammates that could play alongside humans; however, they are not
very capable and are often seen as trivial teammates. Additionally, the only modifi-
cation available to them is the ability to change their skill level from bad to almost
decent, which would make it difficult to modify and manipulate agent behavior for the purposes of a dissertation. Fortunately, a community officially supported by
Rocket League developers has tackled the task of building bot teammates that are
more capable and even have basic teaming capabilities. Although we could easily
build our own bots, this dissertation seeks to utilize the already established bots pro-
vided openly by members of the community. If modifications are needed or desired,
the existing bots can be edited to adjust agent behavior; for instance, team strategy
can be modified to mediate how agents behave in relation to their teammates. The
functionality of these bots varies, ranging from high-performing all-around players to bots focusing on single roles, such as goalie or demolition. The result of these community contributions is a capable experimental platform with access to bots and bot creation resources far beyond anything we have previously used or that is available in other platforms.
Figure 3.2: RL Bot Interface
3.1.3 Different Team Sizes
The final requirement necessary for the experiments in this dissertation is
the ability to adjust the size and composition of the human-AI teams. Fortunately,
the bot creation community has also worked to provide resources in this area as
well. Not only can the number of bot teammates on a team be changed, but the
number of human players can also be changed. Humans can play together either
through local split screen or online connections using a third-party mod that is safe
and common amongst the elite community. Online and remote play requires the
installation of external software, but this is easily manageable if our own systems
are used. Thus, Rocket League is a platform capable of easily facilitating different
human-AI teaming configurations that are often difficult to handle using our previous
platforms.
Figure 3.3: RLBot Team Size Modification
3.1.4 Bonus Reasons for Rocket League’s Selection
While the above points outline the specific requirements that Rocket League
meets, there are still some important bonus features that set Rocket League apart as
a superior platform. First, Rocket League is a very popular game, and many people
have not only heard of it but also played it. This means we can also get a diverse
sample population in regard to human performance and skill level. This provides an
interesting additional factor we can consider when understanding human acceptance
of AI teammate social influence. For instance, high-performing humans may be less
likely to appreciate AI teammate social influence as they see it as taking away from
their ability to perform; however, there is also the possibility that “Game Recog-
nizes Game” and high-performing humans have a greater respect for high performing
teammates.
Secondly, Rocket League is a very non-offensive and appropriate game. This
is an important factor to consider when thinking about long term publication and
discussion of this research. Unlike some other candidate platforms, Rocket League provides an environment free from complicating factors that might discourage people from engaging with or participating in the research.
Third, Rocket League provides a robust and verified internal scoring mecha-
nism that provides objective measures for human and agent teammates. This is often
a difficult element to create in a research platform, but Rocket League has spent
years modifying and fine-tuning these performance metrics to ensure they best reflect
the performance of individual teammates, whether they are humans or bots.
Finally, as mentioned above, Rocket League provides some important parallels with real-world sports teams. While this is not a direct comparison, the wealth of research on sports teamwork can provide an interesting comparison point when talking about human-AI teams. This is an especially important parallel, as robotics research has similarly targeted sports environments with endeavors such as the RoboCup.
Based on the above considerations, Rocket League will serve as the ex-
perimental platform for Studies 1 and 3 as both studies are conducting in-person
mixed-methods experiments.
Chapter 4
Study 1: Using Teaming Influence
to Create a Foundational
Understanding of Social Influence
in Human-AI Dyads
4.1 Study 1: Overview
Not only is Study 1 the first study of this dissertation, but it is also one of
the first studies to explicitly examine the concept of social influence in human-AI
teaming. The primary goal of this study is to provide the first explicit link between
teaming influence and human-AI team outcomes (dissertation RQ2). Additionally, it
is also important to understand the actual underlying process that transitions team-
ing influence into social influence (dissertation RQ1). Fortunately, Study 1 provides
robust enough data to complete both of these goals. Thus, the reporting of Study 1
is being broken up into two sub-studies, Study 1a and Study 1b, which handle the
above two goals, respectively.
Study 1a provides an important understanding of how varying levels of team-
ing influence in human-AI teams ultimately impacts human-AI teams. This under-
standing is critical as the amount of teaming influence AI teammates have stands to
significantly increase in the coming years. This increase will in turn increase the frequency of opportunities AI teammates have to exert social influence on their human teammates. Study 1a examines this potential increase in teaming influence to un-
derstand the foundational impacts of teaming influence that will eventually become
social influence.
For Study 1b, the results of Study 1a will be further contextualized through an
in-depth analysis of participant interviews to understand how teaming influence ulti-
mately becomes social influence. Specifically, this explicit exploration is necessary as AI teammates are an exceedingly new technology that is unfamiliar not only to the research community but also to the general population. Additionally, given this novelty, it would be quite difficult to collect these general perceptions through generic interviews; tying interviews to a task that involves human-AI teaming creates a context that enables a more robust discussion of AI teammate social influence. Thus, Study 1b provides foundational answers on how teaming influence becomes social influence, including the requirements necessary to facilitate this transition.
Given that these two sub-studies share similar contexts and data collection
procedures, the structure of this chapter is as follows: (1) task, participant, and
measurement information shared by Study 1a and 1b; (2) Study 1a research questions,
experimental details, and results; and (3) Study 1b research questions, qualitative
methods, and results. The following section details the shared content and context
for both of these studies.
4.2 Study 1a & 1b: Task
4.2.1 Basic Task
Study 1 revolves around a 2v2 game of Rocket League in which a human worked with a bot teammate to face two bot opponents. While dyads present a unique
teaming environment, the goals of Study 1 are better suited for a dyad as it avoids the
confounding variable of there being varying numbers of bot and human teammates
or the presence of competing teaming influence between humans, a factor that will
be more thoroughly explored in later studies.
Upon starting Study 1, participants took multiple pre-surveys and participated
in a brief training session. Training included completing the official tutorial provided
by Rocket League and playing a free-play session for three minutes that allowed
participants to practice at their own pace. Participants were tasked with playing three five-minute games of Rocket League for each within-subjects condition (discussed below), for a total of six five-minute games. After participants completed a set of three games,
they were provided with multiple post-task surveys and completed an interview. The
surveys completed only referred to the third game played. The interview conducted
covered all three of the games played as it focused on the user experience and the
changes they encountered throughout their three games, which would be difficult to
capture via survey metrics. Importantly, it was decided not to provide surveys after
each game as it would have caused fatigue within participants, and the interview data
would provide a more robust stand-in for that data as well. Finally, after completing
their first three games, surveys, and interviews, participants were then tasked with
doing those three steps over again but with a different within-subjects condition.
4.2.2 Reducing Task Pace
While Rocket League provides a large number of advantages as a research platform, one of its main drawbacks is its fast-paced nature. Although it would have been a reasonable compromise to have participants play a fast-paced game of Rocket League, efforts could still be made to make the task more accommodating. The modification made to the task was to select opponents that would only be goalies and
not actively offensive. While these goalies were able to score and perform kick-off
normally (a fact participants were made aware of), their primary role was to guard
the goal.
In addition to slowing down the pace of the game, this task design provided two further benefits: (1) it allowed participants to better separate perceptions of
opponents and perceptions of teammates; and (2) it made teaming influence easier
to operationalize in an AI teammate. These two benefits created a research platform
that is ideal for both sub-studies of Study 1.
4.2.3 AI Teammate Selection
For the purposes of Study 1, it was decided to utilize the same agent system
for each condition level and simply modify their teamwork strategy to operationalize
teaming influence. This decision ensured that the actual mechanical behavior and skill of teammates was not impacted at different teaming influence levels, which could otherwise have confounded the results of the study. The pool of agents available for selection came from a repository published by the group that develops the RLBot
platform. The agents in this repository are past models of highly competitive bots
that have competed in multiple tournaments, and the repository is published under
an MIT License making it available for modification and use [385]. Thus, initial agent
selection was conducted with the goal of identifying a single agent platform that met
two criteria: (1) the agent needed to be mechanically capable to ensure teaming
influence changes are advantageous; and (2) the teaming strategy of the agent needed
to be easily modifiable to ensure mechanical functionality is not impacted.
Rather than finding the greatest agent available, the goal of answering criteria
(1) was to reduce the potential pool of available agents, thus making it easier for the
research team to identify agents that meet criteria (2). For criteria (1), past tour-
nament performance for each of the potential agents was examined, and each agent
was placed into a scrimmage and observed by researchers. Capabilities such as shot
accuracy, positioning ability, aerial ability, and effective environmental usage were
examined. After a few rounds of observation, a handful of agents were determined to
be capable enough for the experiment. As a side note, during this process, an agent
explicitly designed to be a passive goalie system was discovered, and it was deter-
mined that the participant’s team would face off against two passive goalies rather
than two active opponents to ensure the participant’s teammate was more visible.
Afterward, researchers explored which agent best met criteria (2). Researchers explicitly looked for agents that separated their strategy logic from their more mechanical movement logic, which would help compartmentalize changes to teaming influence and prevent those changes from impacting mechanics. Additionally, the researchers who worked on this project are most comfortable with Python-based platforms, so the implementation language of the agent was also a consideration for criteria
(2). After careful consideration, the agent platform chosen was named “Botimus
Prime.” The selected agent had a stellar tournament performance (including winning
the 2020 2v2 Rocket League tournament), is implemented in Python, and explicitly
utilizes a teamplay script to determine which teammate should prioritize going for
the ball and how the agent should act based on that decision.
4.2.4 Operationalizing Teaming Influence in AI Teammate
Code
With the optimal agent for the experiment identified and chosen, work then
turned to operationalizing and modifying the teaming influence presented by an AI
teammate. It was discovered that AI decision-making was done by looking at how each
team member could effectively intercept the current path of the ball. The agent would
determine time calculations for these intercepts, and whether they would require any
inefficient changes in behavior, such as turning around or stopping. Based on this
information, the AI teammate (i.e. “Botimus Prime”) would decide whether to go
for the ball or stay back, wait, and observe on defense.
Based on these decision-making points, the operationalization and
manipulation of teaming influence was implemented by modifying the fre-
quency at which an AI teammate imposes teaming influence, and in turn
social influence. For instance, a highly influential teammate would go for
the ball more often and take shots frequently, while a less influential teammate would hang back more often. This manipulation was implemented in two
specific programmatic changes. First, the time-to-intercept was modified to appear
shorter for agents with higher teaming influence and longer for agents with lower
teaming influence. This modification ensured that teammates were not making nonsensical decisions, such as turning around to take a bad shot, while still ensuring that high levels of teaming influence led to more frequent utilization of said teaming influence.
Secondly, when determining who should stay in the backfield, the distance to one's own goal was the determining factor. Teammates with lower levels of teaming influence were more likely to play in the backfield, while those with higher teaming influence more heavily favored mid and forward positions. It is important to note that playing
back can be advantageous to performance as one can follow-up on missed or blocked
shots; however, the overall amount of teaming influence on the shared resource (i.e.
the ball) still predominantly favors high levels of teaming influence. Pilot testing was conducted with six volunteers of varying skill levels to determine the actual values and scales of the two implemented biases, ensuring the manipulation was visible
without creating agents that had significant losses in performance. The result of
that process was the creation of three agents that impose teaming influence at three
different frequency levels: low, medium, and high.
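To make this manipulation more concrete, the sketch below illustrates the two biases in the style of the agent's Python teamplay logic. The function names, variable names, and bias values are hypothetical stand-ins rather than Botimus Prime's actual identifiers or the values tuned during piloting.

```python
# Hypothetical sketch of the two biases described above; names and values are
# illustrative, not the real changes made inside Botimus Prime's teamplay script.
INFLUENCE_BIAS = {  # placeholder values standing in for those tuned via pilot testing
    "low":    {"intercept_scale": 1.3, "backfield_scale": 0.7},
    "medium": {"intercept_scale": 1.0, "backfield_scale": 1.0},
    "high":   {"intercept_scale": 0.7, "backfield_scale": 1.3},
}

def should_go_for_ball(my_intercept_time, teammate_intercept_time, influence_level):
    """Bias 1: make intercept times appear shorter for high-influence agents and longer
    for low-influence agents, so high influence leads to more frequent ball challenges."""
    scale = INFLUENCE_BIAS[influence_level]["intercept_scale"]
    return my_intercept_time * scale <= teammate_intercept_time

def should_stay_back(my_distance_to_own_goal, teammate_distance_to_own_goal, influence_level):
    """Bias 2: weight the distance-to-own-goal comparison so low-influence agents prefer
    the backfield and high-influence agents favor mid and forward positions."""
    scale = INFLUENCE_BIAS[influence_level]["backfield_scale"]
    return my_distance_to_own_goal * scale <= teammate_distance_to_own_goal
```

Because only the comparisons driving the go-for-ball and stay-back decisions are perturbed, the agent's mechanical skill remains identical across the low, medium, and high influence levels.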
4.3 Study 1a & 1b: Participants and Demograph-
ics
Participants for this study were recruited through a university subject pool.
The task was designed to be completed within 2 hours with participants receiving an
extra credit point for every 15 minutes of the experiment, totaling 8 credits. While this
demographic can somewhat limit the age range of our data, college-aged individuals represent a population that is not only in touch with modern technology but also beginning to think about their career decisions. Thus, the data collected from them
provides a highly relevant look at how AI’s future can directly impact the modern
workforce. This limitation is reduced in Study 2, which explicitly looks at broader
populations. In total, a power analysis confirmed that with the within-subjects design,
32 participants in each condition would be needed to reach a reasonable power for
a medium effect size for the interaction effect detailed below. However, during data collection, one participant's task data was lost, so a 33rd participant was run. The
lost data will be included in the qualitative results but not in the quantitative results.
The demographic information for this study can be viewed in totality in Table 4.1.
Data collection was conducted during the COVID-19 pandemic, and measures
were taken to ensure the safety of participants and researchers. Specifically, partici-
pants and researchers were required to wear masks and had to have tested negative
for COVID-19 within a week of the experiment.
4.4 Study 1a & 1b: Measurements
Study 1a utilized a mixed-methods design and Study 1b was purely qualitative,
but they shared the same data collection procedure. The data collection process was
broken up into four different sections: pre-task questionnaires; task-derived measure-
ments; post-task questionnaires; and post-task interviews. Each of these components
and the combination of them are critical to answering this study’s research questions.
The subsections below detail the measurements used, in the order they were provided to participants. The survey measures can be found in Appendix A.
4.4.1 Pre-Task Questionnaires
4.4.1.1 Demographics
Pre-Task questionnaires targeted participants’ prior experiences and were used
to understand if perceptions formed before the study impacted their perception of AI
teammate teaming influence. During this step, demographic information for partic-
ipants was also collected, including age, gender, and education level. In addition to
standard demographic information, participants were also asked about their prior ex-
perience with Rocket League as this experience may indicate varying skill levels and
Participant List and Demographic Information
ID  Gender  Age  Ethnicity  Video Game Experience  Rocket League Experience
P01 Female 19 Latino or Hispanic None at all Never
P02 Male 18 Caucasian A good amount A few times a year
P03 Female 18 Caucasian Some Never
P04 Female 19 Black, Asian, Caucasian, Pacific Islander  A good amount  Never
P05 Female 18 Caucasian Some Never
P06 Female 20 Caucasian Some Not in a long time
P07 Male 18 Caucasian A good amount Not in a long time
P08 Female 18 Caucasian None at all Never
P09 Male 19 Caucasian Some Not in a long time
P10 Male 19 Latino or Hispanic Some Not in a long time
P11 Female 18 Caucasian Some Never
*P12 Male 18 Caucasian, Asian  A good amount  Not in a long time
P13 Female 18 Caucasian None at all Never
P14 Male 18 Caucasian A lot A few times a year
P15 Female 18 Asian Some Not in a long time
P16 Female 18 African-American Some Never
P17 Female 18 Caucasian None at all Never
**P18 Male 21 Caucasian A lot Almost every day
P19 Male 18 Caucasian Some Not in a long time
P20 Male 20 Latino or Hispanic Some Not in a long time
P21 Female 18 Caucasian None at all Never
P22 Female 18 Caucasian None at all Never
P23 Female 18 Caucasian None at all Never
P24 Female 18 Caucasian None at all Never
P25 Female 18 Caucasian None at all Never
P26 Female 18 Caucasian Some Never
P27 Female 18 Caucasian None at all Never
P28 Male 22 Caucasian A lot A few times a month
P29 Male 18 Caucasian Some Not in a long time
P30 Male 20 Caucasian A lot Not in a long time
P31 Female 18 Caucasian Some Not in a long time
P32 Female 18 Caucasian Some Never
P33 Female 21 Caucasian, Latino or Hispanic  None at all  Never
*Task data lost and removed from quantitative results
**P18 indicated that they are ranked in the top 1% of competitive Rocket League players.
Table 4.1: Study 1 Participant Demographics
could impact participants’ perceptions regarding AI teammate teaming and social
influence.
4.4.1.2 Negative Attitudes Towards AI
After demographic information, participants were asked about their percep-
tions of AI systems in society. Specifically, participants were asked about their pos-
sible negative attitudes towards AI systems using a modified version of the Negative
Attitudes Towards Robots (NATS) survey [316]. The survey was modified to target
AI teammates as opposed to general robotics. This survey consists of fourteen, five-
point Likert scale questions that elicit possible preconceived negative emotions that
humans may have for AI systems. Answers from these questions are summed with a
higher score denoting a more negative general attitude towards AI systems.
4.4.1.3 Disposition to Trust AI
After the NATS, participants were asked about their disposition toward trust-
ing AI systems using the Merritt Trust Disposition scale [289], which consists of six,
five-point Likert scale questions. Higher scores on this survey denote that a partic-
ipant has a higher disposition to trust AI and machine systems, which may impact
their ability to trust and accept their AI teammates.
4.4.2 Task-Derived Measurements
Scoring data was derived from each of the games played by the participants.
Teammates were rewarded with points for defending their goal, taking shots on the
goal, scoring points, and handling the ball efficiently. This data is important as it
allowed insights to be created around how increases in AI teammate teaming influ-
ence may impact human, AI teammate, and team performance, either positively or
negatively. This data was displayed at the end of each game, and the experimenter
recorded the individual scores for each teammate along with the team’s overall score.
A player’s total score is the aggregate of their ball movements, goals, blocks, and
repeated goal bonuses. However, these factors heavily bias score towards goals and
minimize the contribution of productive ball movements, which are more characteristic of the teaming influence observed in this study. Thus, the player and bot scores were normalized to remove the scoring weights so that score denotes the productive manipulation of shared resources toward goals. Additionally, using this normalized data, improvement values were calculated for each participant to understand how much they improved from round to round. Improvement and the normalized player score became the most important measurements within this study. However, all of the above measurements were collected for each round, meaning the fidelity and robustness of the task-related data are high.
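As a rough sketch of this normalization (the category names and weights below are placeholders, not Rocket League's actual scoring table), each category's points can be divided by its assumed in-game weight to recover raw counts before summing, and round-to-round improvement can then be computed from the normalized scores:

```python
# Hedged sketch: categories and weights are illustrative assumptions, not the
# official Rocket League scoring values.
GAME_WEIGHTS = {"goal": 100, "shot": 50, "save": 50, "touch": 2}

def normalized_score(points_by_category):
    """Remove the goal-heavy scoring weights so the result reflects productive
    manipulation of the shared resource (the ball) rather than goals alone."""
    return sum(points / GAME_WEIGHTS[cat] for cat, points in points_by_category.items())

def improvement(normalized_scores_by_round):
    """Round-to-round improvement for one participant, e.g. [4.0, 5.5, 7.0] -> [1.5, 1.5]."""
    return [later - earlier for earlier, later in
            zip(normalized_scores_by_round, normalized_scores_by_round[1:])]
```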
4.4.3 Post-Task Questionnaires
Post-task surveys were provided to users a total of two times, once after each
set of three games. Participants were asked to answer the surveys regarding the most
recent of the three games they played. This choice is explained later when the design
of Study 1a is discussed.
4.4.3.1 Perceived Teammate Performance
Participants were asked about their perceptions regarding the performance of
the most recent AI teammate they worked with. Perceived teammate performance
was measured using twelve, five-point Likert scale questions that centered around the
ability of a teammate to accomplish their assigned task and effectively operate in team
interaction [101]. Scores were summed, with higher scores denoting that participants
perceived their AI teammate as having had a better performance.
4.4.3.2 Perceived Teammate Trust
Participants’ trust in their AI teammate was also measured. This metric
was originally created to be paired with the pre-task measure on trust disposition;
however, it also serves as a validated metric for measuring participants’ trust in AI
systems. Participants were asked six, five-point Likert scale questions about their
experience and perceived trust with the AI teammate [289]. Scores were summed
with higher scores denoting that participants had greater levels of trust for their AI
teammate.
4.4.3.3 Perceived Team Effectiveness
Additionally, participants' perceived performance of their entire team was measured. Participants completed eight, five-point Likert scale questions that target the
overall effectiveness of teammate interactions and the quality of the team’s perfor-
mance [358]. Higher scores denote higher perceived team effectiveness.
4.4.3.4 Perceived Workload
Participants were also asked about the overall workload in completing the final
game of the task. Workload was measured using the NASA Task Load Index (TLX),
which consists of six, twenty-one point scale questions that asked participants about
mental workload, success, pacing, and other factors that contribute to the overall
workload and effort required to complete a task [172]. Higher scores denote that a
participant perceived a greater workload when completing the task.
4.4.3.5 Perceived Social Influence and Power
The final quantitative measurement captured the perceived social influence and power participants thought they had in their team. Perceived social influence was measured using a modified version of a scale for perceived social influence in marital relationships [378]. Modifications were made to specifically target teaming and
agent relationships as opposed to marital relationships. Participants were asked five,
seven-point Likert scale questions that centered around the social influence they per-
ceived their AI teammate to have and how they responded and coped with that social
influence [378]. Scores were summed with higher scores denoting that the human per-
ceived that they had a greater level of social influence in the team.
4.4.3.6 AI Teammate Acceptance
As the acceptance of AI teammates is critical to this dissertation, it is also
important to quantitatively measure the acceptance of the AI teammates participants
work with. Unfortunately, given that AI teammates are in their infancy, the measure-
ment of their acceptance is not entirely exact and needs to be adapted from existing
technology acceptance measures. The most generalizable and applicable measure-
ments of acceptance utilize multiple Likert scales to rate the perceived qualities of
technology, such as utility, desirability, and efficiency [447]. Higher scores denote a
higher degree of acceptance of the AI teammate.
4.4.4 Post-Task Interview
In addition to post-task quantitative measurements, an interview was con-
ducted with each participant after each set of three games. Thus, two 15-minute
interviews were conducted with each participant. These interviews were specifically
designed to discuss the perceptions and feelings participants had for their AI teams
during the task and how those perceptions were affected by changes in AI teammate
teaming influence. Two different scripts were created based on the within-subjects
conditions of Study 1a (discussed in more detail later): (1) a static condition script;
and (2) a dynamic condition script. Each script targeted specific, unique aspects of
the condition to provide high-fidelity data that would not be possible to gather from
traditional quantitative measurements.
For (1), questions were more geared towards determining which teammate was
viewed as having a greater level of social influence and why that perception exists.
Participants were asked if they felt this relationship was always constant or if there
were instances within a task where they felt one teammate had more social influence
than the other. Participants were also asked if they felt the current relationship
met their expectations for teammates and if they would change anything about their
teammates in the future. Participants were also asked if these feelings were context-
specific or if there were contexts where they think the social influence relationship
should differ from the one they experienced. Additionally, participants were explicitly
asked if they would feel comfortable giving the AI teammate more or less teaming
influence than they had in the task. Participants were also asked about how easy or
difficult they found it to adjust to their AI teammate throughout the three games and
if they felt they achieved a good symbiotic relationship by the end of the three games.
Participants were asked how their relationship might be perceived differently if it took place in a physical environment as opposed to a video game. The questions
above, and other sub-questions, provide critical insights into how humans perceive
and react to social influence in human-AI teams as well as various factors that may
impact those perceptions.
Interview (2) is much more targeted toward understanding how humans feel
about agents having varying levels of teaming influence when teaming with humans.
The interview was structured similarly to interview (1), but it was more concerned
with how participants reacted to the overall trends in their AI teammate. Early
questions dealt with how participants felt changes in teaming influence impacted
their ability to adjust to their AI teammate. Then, questions started targeting the
actual teaming influence trend and if they felt that trend was sustainable or if they
would like to prevent their AI teammate from gaining or losing any more teaming
influence. Participants were then asked about their comfort with these changes in
teaming influence existing in real-world scenarios, such as within an office setting.
Finally, participants were asked about how these perceptions may be affected by the
fact that their teammate was an AI teammate as opposed to a real human. The
answers provided in this interview were critical in pinpointing how a change in AI
teammate social influence may impact the real world through humans’ interactions
with AI teammates.
4.5 Study 1a: Overview and Research Questions
Study 1a of this dissertation is focused on primarily answering dissertation
RQ2. This study utilizes a mixed-methods experimental design to understand how
humans react to varying amounts of teaming and social influence from AI teammates.
This understanding is critical as the amount of teaming and social influence AI team-
mates attempt to apply will increase in the coming years, meaning the answers created by this study will only grow in relevance. Thus, the following research questions have been formulated for Study 1a based on the motivations,
gaps, and prior work discussed in this dissertation:
RQ2.1 How does the amount of teaming influence imposed by an AI teammate change
human performance and perception?
RQ2.2 How do experienced variations in the amount of teaming influence imposed by
an AI teammate change human performance and perception?
4.6 Study 1a: Experimental Design
As mentioned above, Study 1a focuses on how social influence between humans
and AI systems can change based on changes in the number of times a single AI system
attempts to impose teaming influence. As such, the experimental conditions and
design of this study center around that concept. Specifically, this study utilized two
different experimental conditions: (1) the level of teaming influence an AI teammate
has, and (2) whether or not that teaming influence is static or dynamic. These
experimental conditions and their levels are shown in Table 4.2 and discussed below.
For condition (1), this study examines high and low amounts of teaming in-
fluence applied by an AI teammate. Condition (1) is a between-subjects condition,
meaning that each participant was assigned a high or low level of teaming influence in
their AI teammate. The assignment of these conditions was fully randomized across
all participants. For condition (2), this study looks at whether the level of teaming
influence an AI has is either static from game to game or dynamic. In other words,
condition (2) provides a stand-in for the gradual change in AI teaming influence that
society will see in the future and synthesizes it to a small scale. Thus, a static condi-
tion would always see an AI teammate's teaming influence at its assigned level, while a dynamic condition would move up or down towards that level throughout the three games played. Condition (2) is a within-subjects condition, which means each par-
ticipant played two sets of three games, once with a static teammate and once with
a dynamic teammate.

Figure 4.1: Experimental Procedure for Study 1a

The order in which these conditions were given to participants was randomized on a per-participant basis to normalize any learning biases. The
decision to go with a mixed-within-between design was made as it heavily benefits
the qualitative data collected in two key ways: (1) multiple interviews can be given
that allow participants to discuss how social influence impacted them over time, and
(2) having an interview and survey for each within-subjects condition allows a more critical
and contextual comparison between conditions.
Based on the above design of condition (2), it is important to update the
labeling for condition (1). Rather than being the general level of AI teammate teaming
influence, condition (1) refers to the final level of AI teammate teaming influence
humans experience across their three games. Based on this design, participants in
the high condition interacted, during their dynamic set, with a teammate that exhibited
low, normal, and then high levels of teaming influence in that order, while participants
in the low condition experienced the reverse order. The survey design of this work
also ensures that the perceptions gathered refer only to the between-subjects condition.
The procedure that utilized these experimental conditions is visualized in
Figure 4.1.
Figure 4.1: Experimental Procedure for Study 1a
Condition 1: Final Levels of Teammate Teaming Influence
    High Target AI Teammate Teaming Influence
    Low Target AI Teammate Teaming Influence
Condition 2: Variability in Teaming Influence Across 3 Games
    AI Teammate Teaming Influence is Dynamic (Increases or Decreases)
    AI Teammate Teaming Influence is Static

Study 1a: Study Design Matrix (Final Teaming Influence Level, between x Teaming Influence Variability, within)
    Targeting Low Teaming Influence + Static Teaming Influence Levels
    Targeting Low Teaming Influence + Dynamic Teaming Influence Levels
    Targeting High Teaming Influence + Static Teaming Influence Levels
    Targeting High Teaming Influence + Dynamic Teaming Influence Levels

Table 4.2: Study 1 2x2 experimental design.
4.7 Study 1a: Results
4.7.1 Quantitative Results
The quantitative results are presented by dependent variable with descriptive
statistics of mean and standard deviation reported for significant findings. All sta-
tistical assumptions for tests used (i.e., normality, homoscedasticity) were met unless
otherwise stated.
4.7.1.1 Performance
The first section addresses aspects of RQ2.1 and RQ2.2, which sought to determine
how the amount of teaming influence imposed by an AI teammate and how variations
in that teaming influence affected human performance in human-AI teams.
Score. A 2 (AI teaming influence: High, Low) x 2 (AI Variability: Dynamic, Static)
x 3 (Round: 1, 2, 3) mixed repeated-measures analysis of covariance (RMANCOVA)
was conducted to assess the effect of AI teaming influence (between-subjects), AI
variability (within-subjects), and round (within-subjects) on participants’ score while
controlling for prior video game experience (see Table 4.3 for descriptive statistics).
The test indicated a significant main effect of AI teaming influence on score (F(1, 59) = 4.90, p = .031, η² = .31; see Figure 4.2a), such that participants working with the high teaming influence AI teammate (M = 239.43, SE = 18.17) had lower scores than those working with low teaming influence AI teammates (M = 271.55, SE = 18.17). The main effect was qualified by an ordinal interaction effect between AI teaming influence and AI variability (F(1, 59) = 4.86, p = .031, η² = .08; see Figure 4.2b). The simple main effects of AI teaming influence indicated that there
was no significant difference in score between the two AI teaming influence conditions
in the dynamic AI variability condition (F(1) < 0.001, p = .995). However, in the static AI variability condition, participants working with the low AI teaming influence teammate had significantly greater scores than those working with the high AI teaming influence teammate (F(1) = 10.50, p = .003). This simple main effect indicated that AI teaming influence played a significant role in participants' scores only when AI variability was static and that low AI teaming influence resulted in the highest scores in the static condition.
Table 4.3: Descriptive statistics for score.
Round AI Teaming Influence (Between) AI Variability (Within) M SD N
Round 1 High Dynamic 118.8750 79.5486 16
Static 66.5000 60.7706 16
Low Dynamic 88.8750 91.1350 16
Static 142.3750 84.9289 16
Round 2 High Dynamic 110.3750 67.1882 16
Static 92.8750 98.2486 16
Low Dynamic 103.8750 98.6697 16
Static 153.4375 134.1293 16
Round 3 High Dynamic 107.8125 151.4940 16
Static 80.7500 70.5932 16
Low Dynamic 144.6875 115.2370 16
Static 136.6250 109.5146 16
Additionally, there was a significant three-way interaction (F(2, 118) = 3.20, p = .044, η² = .05; see Figures 4.2c and 4.2d). For this three-way interaction, the simple main effects of AI teaming influence revealed that participants working with the dynamic AI teammate had no significant score differences between the high and low AI teaming influence conditions in Round 1 (F(1) = 2.38, p = .134), Round 2 (F(1) = .081, p = .779), or Round 3 (F = 1.13, p = .297). However, participants' scores when working with the static AI teammate were significantly higher for the low AI teaming influence teammate in Round 1 (F(1) = 18.45, p < .001), Round 2 (F(1) = 6.51, p = .016), and Round 3 (F(1) = 5.16, p = .031). These simple effects of the interaction indicate that participants' scores were significantly affected by round, AI teaming influence, and variability level, such that low AI teaming influence teammates with static variability produced the highest scores. Additionally, the simple main effects of AI variability showed that score was significantly higher for participants working with the dynamic AI teammate in Round 1 when it had a high level of teaming influence (F(1) = 11.75, p = .002), but this effect was reversed in the low AI teaming influence condition, as the static AI teammate was associated with significantly higher scores in Round 1 (F = 6.30, p = .018), with all other comparisons showing no significant differences. Lastly, the main effects of round (F(2, 118) = 0.72, p = .485, ηp² = .01) and AI variability (F(1, 59) < .001, p = .982, ηp² < .01), the interaction effect between round and AI teaming influence level (F(2, 118) = 1.06, p = .350, ηp² = .02), and the interaction effect between round and AI variability (F(2, 118) = 1.91, p = .152, ηp² = .03) were all insignificant.
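For readers who wish to reproduce this style of analysis, the sketch below shows one way a comparable model could be approximated in Python with a linear mixed-effects model, since the score analysis combines a between-subjects factor, two within-subjects factors, and a covariate. The long-format data file, its column names, and the use of statsmodels are assumptions made purely for illustration; they do not reflect the software used for the analyses reported here.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x variability set x round.
# Expected columns: participant, influence (high/low), variability (static/dynamic),
#                   round (1-3), score, game_experience
df = pd.read_csv("study1a_scores_long.csv")

# A random intercept per participant accounts for the repeated measures, and
# game_experience enters as the covariate, mirroring the RMANCOVA reported above.
model = smf.mixedlm(
    "score ~ C(influence) * C(variability) * C(round) + game_experience",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())

A dedicated repeated-measures ANCOVA routine would report the same factors as F tests; the mixed model above is simply a convenient approximation when such a routine is unavailable.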
Figure 4.2: AI teaming influence and variability's effect on participants' scores, displaying the main effect of teaming influence (Figure 4.2a) and the interaction effect between teaming influence level and variability (Figure 4.2b). Figures 4.2c and 4.2d display the three-way interaction between round, teaming influence level, and variability, with Figure 4.2c showing the low AI teaming influence condition and Figure 4.2d showing the high AI teaming influence condition. Error bars represent 95% confidence intervals.

Improvement. A 2 (AI teaming influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' performance improvement while controlling for prior video game experience (see Table 4.4). The main effects of AI teaming influence (F(1, 59) = 1.94, p = .169, ηp² = .03) and AI variability (F(1, 59) = 1.16, p = .284, ηp² = .02) were each insignificant. However, the interaction effect between AI teaming influence and AI variability was significant and disordinal (F(1, 59) = 6.68, p = .012, ηp² = .10; see Figure 4.3). The simple main effects of AI variability revealed that there was a significant difference in performance improvement between the two AI variability levels in the low AI teaming influence condition (F(1) = 6.70, p = .012), with dynamic AI variability producing the best improvement, and no significant differences within the high AI teaming influence condition (F(1) = 1.13, p = .291). This result indicates that participants got significantly better at the task when the teaming influence of their AI teammate changed across the rounds and that participants had the biggest improvement when their AI teammate trended towards low levels of teaming influence.
Table 4.4: Descriptive statistics for score difference.
AI Teaming Influence AI Variability M SD N
High Dynamic 11.0625 101.0383 16
Static 14.2500 40.3906 16
Low Dynamic 55.8125 58.0485 16
Static 5.7500 61.2313 16
Figure 4.3: Interaction effect between AI teaming influence and variability on partic-
ipants’ score difference. Error bars represent 95% confidence intervals.
4.7.1.2 Perceptions
The second section of the quantitative results formally investigates the other
half of RQ2.1 and RQ2.2, specifically looking at the effect of AI teammate teaming
influence level and variability on perceptions like trust, efficacy, and workload.
Cognitive Workload. A 2 (AI Teaming Influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' cognitive workload while controlling for prior video game experience (see Table 4.5). The main effect of AI teaming influence was significant (F(1, 59) = 4.60, p = .036, ηp² = .07; see Figure 4.4a), with participants working with the high teaming influence AI teammate (M = 56.50, SE = 3.36) experiencing a higher cognitive workload than those working with the low teaming influence AI teammate (M = 46.31, SE = 3.36). The main effect of AI variability (F(1, 59) = 0.40, p = .530, ηp² = .01) and the interaction effect between AI variability and AI teaming influence (F(1, 59) = 0.12, p = .734, ηp² < .01) were each insignificant.
Table 4.5: Descriptive statistics for workload.
AI Teaming Influence AI Variability M SD N
High Dynamic 55.8125 23.4839 16
Static 57.1875 22.4060 16
Low Dynamic 44.0000 19.8997 16
Static 48.6250 25.2794 16
Perceived AI Teammate Efficacy. A 2 (AI Teaming Influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' perception of the AI teammate's performance while controlling for prior video game experience. The main effects of AI teaming influence (F(1, 59) = 0.48, p = .493, ηp² = .01), AI variability (F(1, 59) = 1.01, p = .320, ηp² = .02), and their interaction effect (F(1, 59) = 3.20, p = .079, ηp² = .05) were all insignificant.
Trust in the AI Teammate. A 2 (AI Teaming Influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' trust in the AI teammate while controlling for prior video game experience. The main effects of AI teaming influence (F(1, 59) = 0.77, p = .383, ηp² = .01), AI variability (F(1, 59) = 1.74, p = .193, ηp² = .03), and their interaction effect (F(1, 59) = 0.69, p = .410, ηp² = .01) were all insignificant.
Team Efficacy. A 2 (AI Teaming Influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' perception of team efficacy while controlling for prior video game experience. The main effects of AI teaming influence (F(1, 59) = 1.01, p = .320, ηp² = .02), AI variability (F(1, 59) = 0.18, p = .669, ηp² < .01), and their interaction effect (F(1, 59) = 0.31, p = .583, ηp² = .01) were all insignificant.

Figure 4.4: Main effect of AI teaming influence level on participants' perceived workload level (Figure 4.4a) and the main effect of AI teaming influence on the participants' perceived level of teaming influence in comparison to their AI teammate (Figure 4.4b). Error bars represent bootstrapped 95% confidence intervals.
Perceived Social Influence in Comparison to the AI Teammate. A 2 (AI Teaming Influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' perception of the AI teammate's level of social influence while controlling for prior video game experience (see Table 4.6 for descriptive statistics). The main effect of AI teaming influence level (F(1, 59) = 4.86, p = .031, ηp² = .08) was significant, with participants working with a high teaming influence teammate perceiving lower levels of personal social influence (M = 18.56, SE = 0.89) and those working with a low teaming influence teammate perceiving higher levels of personal social influence (M = 21.16, SE = 0.89). However, AI variability (F(1, 59) = 1.08, p = .305, ηp² = .02) and the interaction effect (F(1, 59) = 0.12, p = .731, ηp² < .01) were both insignificant.
Table 4.6: Descriptive statistics for perceived social influence in comparison to the
AI teammate.
AI Teaming Influence AI Variability M SD N
High Dynamic 17.7500 4.0083 16
Static 19.3750 5.6554 16
Low Dynamic 20.7500 5.4833 16
Static 21.5625 4.8162 16
AI Teammate Acceptance. A 2 (AI Teaming Influence: High, Low) x 2 (AI Variability: Static, Dynamic) mixed ANCOVA was conducted to assess the factors' effects on participants' acceptance of the AI teammate while controlling for prior video game experience. The main effects of AI teaming influence (F(1, 59) = 0.41, p = .523, ηp² = .01), AI variability (F(1, 59) = 0.20, p = .658, ηp² < .01), and their interaction effect (F(1, 59) = 0.62, p = .433, ηp² = .01) were all insignificant.
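Each of the perception analyses in this section follows the same 2 (teaming influence) x 2 (variability) mixed design. The sketch below shows how that family of tests could be approximated with the pingouin package; note that pingouin's mixed_anova does not accept covariates, so this illustration omits the video game experience covariate used in the reported ANCOVAs, and the data file and column names are assumptions made only for illustration.

import pandas as pd
import pingouin as pg

# Hypothetical long-format perception data: one row per participant x variability set.
# Expected columns: participant, influence (between), variability (within), workload
perceptions = pd.read_csv("study1a_perceptions_long.csv")

# 2 x 2 mixed ANOVA on cognitive workload (covariate omitted; see note above).
aov = pg.mixed_anova(
    data=perceptions,
    dv="workload",
    within="variability",
    between="influence",
    subject="participant",
)
print(aov.round(3))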
4.7.2 Qualitative Results: Why Humans can Perceive Vary-
ing Levels of AI Teaming Influence Differently
The insignificance of the impacts of AI teammate teaming influence level on
human perception is highly interesting given the observed, significant impact of AI
teammate teaming influence on actual human performance. The qualitative results
presented demonstrate that the perceptions humans form are not exclusively driven
by the impacts teaming influence has on human performance. Rather, the below re-
sults demonstrate that if an AI teammate uses their teaming influence to benefit the
personal goals of a human teammate, then humans will form acceptance for said AI
teammate’s teaming influence. The following discusses the three prominent findings
of the interviews conducted, including: (1) how humans use their personal motiva-
tions and goals to determine their ideal teaming influence level of an AI teammate;
(2) how humans use their personal motivations to determine their ideal for how AI
teammates should change their teaming influence level; and (3) factors humans will
also consider when determining their ideal AI teammate teaming influence level in
real-world contexts. As a note, for this section, the following labeling will be given
with each quote (PID, Between Condition, Post-Within Condition Interview).
4.7.2.1 The personal motive and goal of human teammates dictate their
ideal level of AI teammate teaming influence.
The most predominant finding of the interviews conducted by this study is
the importance of personal motivations in determining a human’s preference for how
much teaming influence their AI teammate has. This finding helps explain the in-
significance found in our perception results as the perceptions humans form are not
solely determined by teaming influence level but rather by the alignment of AI team-
mate teaming influence with a human’s personal motivation. In a team setting, this
poses a complication, as individuals' goals and their team's goals may not be entirely the
same. Participants, such as P28 and P33, explicitly noted that their preference for
an AI teammate would change based on their personal motivation:
I’m just thinking in terms of like, what I need to win. In that standpoint,
I want teammate number three but say like, I wanted to practice I would
probably want number one because I thought they had the least influence.
(P28, High, Dynamic)
When I was trying to maximize winning, I wouldn’t go to the ball, I would
just let them do the whole thing, because I knew I wasn’t very good at
it. But then to maximize my enjoyment, I’d probably keep going and see
how many times I could touch the ball even if it wasn’t going in the goal.
(P33, High, Static)
In the short term, this misalignment is less important, as some participants
even noted the importance of having a highly influential teammate in the early stages
of teaming to help “set the tone.” In the long term, however, if an alignment is
not achieved between these motivations and AI teammate teaming influence, then it
will lead human teammates to lose motivation. Participants P02 and P26 echo these
sentiments when they signaled how they eventually gave up due to this misalignment:
I noticed how in the beginning, they would always be really aggressive.
And I think in like all three games, we scored a goal, like within the first
20-30 seconds, so I liked how it kind of like set the tone in that regard.
So I think I would keep that... being aggressive and having that sort of
mindset. (P02, High, Static)
Towards the end of the third game of the last set, I didn’t feel like I was
improving at all. And this one, I think, I just kind of gave up. (P26,
High, Dynamic)
The findings discussed below are organized around (1) motivations
that align with highly influential teammates, (2) motivations that align with lowly
influential teammates, and (3) motivations that align with teaming influence being
relatively even between teammates. Ultimately, these findings demonstrate how the
intrinsic link between personal motivation and AI teammate teaming influence com-
plicates the formation of AI teammate preference, which makes it difficult to observe
through statistical measures.
4.7.2.1.1 A teaming influence balance that favors AI teammates is preferred
by humans who enjoy winning and prefer to learn by watching. While
highly influential AI teammates can lead to greater levels of performance in human
team members, personal motivations may be more critical than these performance
gains in dictating preference for teaming influence level. Specifically, participants
noted two key personal motivations that directly aligned with high levels of AI team-
mate teaming influence: (a) the desire to win; and (b) the desire to learn by watching.
For motivation (a), multiple participants were mostly supportive of having
highly influential AI teammates because they felt it best helped them win. This desire
to win may come from the competitive nature some participants have, with participant
P24 (Low, Static) echoing this sentiment: “yeah, I’m already hyper competitive.”
Moreover, participants explicitly linked their enjoyment of the activity with whether
or not they were winning, which ultimately motivated them to prefer a highly influential
teammate. For instance, P12, P26, and P29 noted their preference for winning as it
is their actual purpose:
I’d probably take the one I just played with. Considering, you know, they
won two games and tied one. (P12, High, Static)
I’d rather win than really have like an influence on the team. (P26, High,
Static)
The whole point is to win... Yes, I’d rather take someone that will just
do that. (P29, High, Dynamic)
Importantly, highly performative human teammates may actually be the ex-
ception to this trend as they see themselves as solely capable of fulfilling their desire
to win. For instance, P18 echoed their preference for a lowly influential teammate
while also mentioning the importance of winning:
Um, I think they’re almost connected. If I know I’m doing really poorly, I
just won’t be having a good time. But because that can a thing of either
you are hard on yourself or your teammates are hard on you. (P18, Low,
Static)
For motivation (b), learning was a common personal motivation participants
had, and many of them prioritized learning through watching. These participants
were heavily supportive of highly influential teammates as they could use their AI
teammate as an exemplar who was constantly demonstrating skill. For instance, P12
and P22 noted that they mostly wanted to watch their AI teammate in order to
improve:
I feel like that would help. Especially, since I was learning as I went, I
can be able to learn and see what they do and then try to react off that.
(P12, High, Dynamic)
I think I’m still improving, and I think I’m learning some skills from the
dominant players, like how to jump or flip or whatever. (P22, High, Static)
The above two motivations show that there is a desire in human teammates
to have highly influential teammates, even if that desire results in them having lower
performance potential. However, the above also demonstrates that the reason for having
a highly influential teammate is not consistent from person to person, meaning
that other contexts may actually see specific motivations align with different levels of
teaming influence due to the design of the task and team.
4.7.2.1.2 A teaming influence balance that favors human teammates is pre-
ferred by humans who enjoy playing and prefer to learn by doing. Par-
ticipants who prefer lowly influential teammates often place a greater sense of value
on the actual experience and participation they have as teammates. Interestingly,
while performance was higher for participants who played with lowly influential
teammates, actual short-term performance was not a strong factor of consideration
when determining teammate preference. Rather, the following two motivations were
commonly seen as reasons for wanting a lowly influential teammate: (a) the desire to
have fun playing a game; and (b) the desire to learn by doing the task.
For motivation (a), a large number of participants felt that highly influential
teammates prevented them from actually playing the game. Ultimately, highly
influential teammates reminded participants of unpleasant “ball hogs” who made the
game itself less enjoyable, thus reducing the perceptions humans had of these teammates.
Participants P04 and P15 are clear examples of this motivation as they felt the entire
purpose of the game was to have fun and not to win, and participant P06 even linked
this fun with the better performance they experienced:
[Asked about having a highly influential teammate] No, not for me... I’ve
grown up playing sports, and I don’t like a ball hog. And you’re on a
team for a reason. (P04, Low, Static)
...you also just don’t want to feel like you’re kind of like a side character
and you’re kind of just watching somebody do all the work. So I probably
just pick the first one [low teaming influence] still. (P15, High, Dynamic)
It is important to note that even if a participant wants to have fun,
the actual context and task being completed are going to dictate whether or not they
want to have fun. Similar to those who are motivated by winning, this finding means
that wanting to have fun is not a universal trait across all tasks but rather dictated
by the enjoyment of that specific task. For instance, P16 pointed out that they would
want a lowly influential teammate in a task they played more often:
So it’s kind of like I’d rather it would just do the work for me, I guess. But,
if it had been another sport, maybe like softball, I would have been real
upset about it because I played softball in my life. (P16, Low, Dynamic)
For participants with motivation (b), learning was still really important, but
the actual method of learning centered around participation rather than observation.
Lowly influential teammates provided these participants with the room they needed
to actually perform, make mistakes, and learn in a hands-on manner. Participants
P08 and P20 echoed this sentiment in their interviews by explicitly mentioning the
motivation to learn and the need to do it in a hands-on manner:
I think when I was able to have more of a chance to play and interact
with the ball and actually try to do it, I was more motivated to try to do
well. But then as like, my teammate kept doing it and kind of taking the
ball away from me, I just was like, oh, I could lay back a little bit. (P08,
High, Dynamic)
I definitely could see how I could learn by watching them but it was a lot
easier to learn when they weren’t really in control. (P20, Low, Static)
Additionally, P17 pointed out that while they may have been fine with high
teaming influence in the beginning, it became less and less welcome as the need for
hands-on learning increased alongside their improvement:
I think it was a little bit worse. In the end, it might have been because I
was getting better at it. So they didn’t need to be everywhere as much.
(P17, Low, Dynamic)
The above results demonstrate that hands-on experience is a priority for many
humans working in human-AI teams. Whether that experience is used for
learning or just to have a more enjoyable experience, the availability of that hands-on
experience is critical to these people. However, highly influential teammates may
deprive humans of this desired experience, especially if that teaming influence level
is maintained for a long period of time.
4.7.2.1.3 An even balance in teaming influence is preferred by humans who
prioritize healthy teamwork. While a large portion of participants tended to
have a personal preference for high or low levels of teaming influence in their AI
teammates, some participants were actually more receptive towards having even levels
of teaming influence between both teammates. Generally, these participants were less
concerned with the performance of their team than with the balance of teaming
influence present. Both P01 and P19 mentioned their perceived importance of this
balance:
Sometimes you have to limit yourself to other people’s like skill levels.
(P01, High, Dynamic)
I liked the second one the most, because it was almost balanced, but to
the point where I could rely on someone else. (P19, Low, Dynamic)
Participants who have these strong beliefs about teamwork can even use these
beliefs to override other personal motivations. For instance, winning may not be a
priority if a participant feels like a team is trying their best and working cohesively.
This sentiment was echoed by participant P16 (Low, Static): “I’m completely fine
losing, as long as we lose together.”
Importantly, this does not mean that humans want the amount of AI teammate
teaming influence to be exactly the same as their own. Rather, they simply want a
balance that allows responsibility to be shared in a way that is not one-sided. The
following from P19 clearly illustrates how having a better teammate is not bad as
long as that teammate is somewhat similar to the participant:
Um, I would want them above me, but not super expert because I wouldn’t
want to feel like I’m totally being carried or anything. But, I still would
like someone better than me so it’s almost like a cushion. (P19, Low,
Static)
The above subthemes demonstrate how the ideal level of teaming influence
that an AI teammate has is a highly personal factor. While the performance of a
team or individual ties into this preference, it is ultimately up to the motivations of
the individual in determining an ideal level of teaming influence. Unfortunately, this
may make it difficult to design AI teammates with an ideal level of teaming influence
as the prediction of these personal motivations will be unique from team to team, and
the above does not represent an exhaustive list of all the potential motivations that
exist. For instance, the following subthemes demonstrate how the dynamic nature
of AI teammate teaming influence or its application in the real-world will ultimately
expand the motives used to determine alignment and preference.
4.7.2.2 Personal preference for the dynamic nature of AI teammate team-
ing influence is also contingent on personal motive alignment.
The level of teaming influence an AI teammate has is not guaranteed to be
static as environmental or technological changes may ultimately change the role these
teammates play. In addition to determining the ideal level of AI teammate teaming
influence, the findings of this study also uncovered that the method of transition
between teaming influence levels is also tied to personal motivation. Specifically, the
motivation to adapt was a prevalent concern amongst most participants when
determining their ideal method of teaming influence transition. Participant P05
provided a clear example of how this motivation to adapt was impacted by changes in
teaming influence level:
First game, compared to last game, difficulty adjusting... I thought it
was probably harder in the beginning, because like, they were kind of
hanging back a little bit more. They weren’t actually taking control, but
then when they were taking control the situation, it was easier for me to
understand. (P05, High, Dynamic)
While adaptation was a central motivation for participants' preference, participants'
preferred methods of adaptation differed, which ultimately shaped their
preference for how an AI teammate should transition between teaming influence levels.
Firstly, a considerable number of participants prioritized consistency
in adaptation. These participants wanted quick transitions that, while jarring, would
provide a greater amount of time with an AI teammate at a static teaming influence
level. In other words, these participants show a greater affinity for static AI team-
mates over highly dynamic teammates in environments where a change in teaming
influence is inevitable. For instance, when asked about whether they would prefer
quick or gradual changes in AI teammate teaming influence, P09 and P14 said the
following:
Probably the chunk down have seven games the same teammates, because
it’d be a lot easier to adapt to it. Yeah, I mean, you know, working with
the same I guess level of skill would be a lot easier to adapt to them just
continuously trying to adapt to a new one. (P09, Low, Dynamic)
I wouldn’t want it to keep changing, because I would have to adjust every
time. So if I keep it consistent, then I can keep whatever my plan is
consistent, which I feel would work out better. (P14, Low, Dynamic)
On the other hand, some participants were much more receptive towards dy-
namic levels of AI teammate teaming influence; however, their preference was for that
teaming influence to be dynamic in a much more gradual manner. This preference
stems from the motivation to gradually adapt to AI teammates by making small,
but frequent adjustments. Since this method of transition would provide participants
with a less jarring and more iterative approach to adaptation, it was ultimately pre-
ferred by some participants in environments where a change in teaming influence is
inevitable. Participants P19 and P08 echoed this preference during their interviews:
I feel like the gradual descent would make me a little better and get more
comfortable because it’s kind of the aggressive teammate allowing me to
get used to it. And then slowly, I can move on to the more passive team-
mate so I can do more in the game and be better. (P19, Low, Dynamic)
I would rather the gradual one, just because I guess it slowly gives me a
chance to get used to it instead of just going from like, zero to 100. (P08,
High, Dynamic)
While a wide range of unique motivations impacting the preference for
static or dynamic teaming influence was not found, the importance of adaptation to
humans reveals a clear and personal preference. Humans who prioritize consistency
are going to want faster, more visible change, but humans who thrive on gradual
and slow-paced change are going to want the same in their AI teammates. However,
the existence of these preferences in combination with the motivations found in the
previous theme paint a highly complex and personal picture of the preference for AI
teammate teaming influence. Not only are humans unique in their preference for its
level, but they are also uniquely different in their preference for how that teaming
influence should change over time.
4.7.2.3 Real-world preference of AI teammate teaming influence will be
further mediated by context and risk.
While the above two themes were heavily contextualized in the scope of the
games of Rocket League played by participants, interviews also discussed preference
for AI teammate teaming influence in other contexts. Similar to the above results, it
appears that the transition into the real world will also introduce competing motivations
and preferences that will further complicate the ideal level of AI teammate teaming
influence. Specifically, participants were heavily motivated to minimize the risk AI
teammates posed to themselves and society. While situations with very little risk
and importance may be more welcoming to highly influential AI teammates, humans in
high-risk contexts may be less willing to give teaming influence to AI teammates. The
following quotes from P12 and P27 provide examples of this motivation while also
revealing that this motivation to minimize risk may actually be highly unique to AI
teammates:
Probably high-risk jobs because even though the machine could be really
well built and everything. If the machine messes up, other than like a
human, that’d probably be, like a bigger deal than he [the human] messing
up. Yeah, it happens, just move on and learn from it. (P12, High, Static)
I think it’d be the same. This is like a soccer game. I mean, it’s like
important, I guess, but like, not really that important. I’d be a little
more reserved when it came to really important stuff working with the
bot to a teammate. (P27, High, Static)
However, the challenge with risk being a complicating factor is that it may
actually be highly subjective to humans, meaning that while risk is a consistent
consideration there is a high degree of variance in this consideration, similar to the
consideration of adaptation. For instance, participants are more inclined to attribute
risk to the targeting of vulnerable populations, the potential consequences of a mis-
take, or even the impact it may have on personal comfort. Participants P12, P17,
and P14 all expressed their consideration of risk while providing different contexts
and factors that contribute to risk:
I think it is gonna affect their growth a little bit. Because I feel like
it’s very important to have that human connection when you’re younger.
(P12, High, Static)
If it’s something like with a car where it’s putting you or other people
in danger, where if it crashed or malfunctioned, you could die, I think I
would want to have more control over that. (P17, Low, Static)
I would rather my mom cooked me a meal or something like that. Not
that the machine’s not in control, but it’s easier to let’s just say you are
cooking potatoes and be like “hey can you put not as much butter or
something like that”. Whereas the machine I feel like it would be harder
to like, communicate something like that. (P14, Low, Dynamic)
Due to the perceived personal nature of AI teammate teaming influence identi-
fied above, participants were also quick to note that they felt the best path to take for
AI teammate teaming influence was that of choice. While this may not be ideal from
an AI design perspective as it increases the potential complexity of an AI teammate,
the preference for choice in the real world was apparent. Participant P32 echoed that
they felt leaving the preference of AI teaming influence up to choice was ultimately
the best option:
I really feel like everybody’s gonna feel different, or there’s going to be
just two very separate sides, and it’s going to be for or against... Maybe
if we just kind of leave it up to choice, it would make people a lot happier
rather than just kind of putting them in everything. (P32, Low, Dynamic)
In addition to the motivations identified by previous themes, the concept of
risk explored by this theme demonstrates that the transition of AI teammates into
real-world settings will be further complicated by unique personal preferences driven
by the motivation to reduce risk. Moreover, the variance that exists in the concept
of risk further complicates this matter as it would make it increasingly difficult to
predict if a task is deemed risky by a human, in turn resulting in a preference for low
teaming influence in an AI teammate.
4.7.3 Summary Results
In summary, the results of this study are both promising and complex for the
future of AI teammates. In answering RQ2.1, it was observed that high levels of AI
teammate teaming influence ultimately lead to lower levels of human performance
(Table 4.3); however, these performance impacts do not seem to have a clear, quan-
titative connection to humans’ perception and preference their AI teammate’s level
of teaming influence. While perceptions heavily tied to teaming influence, such as
workload and perceived teaming influence, are impacted, perceptions more closely
related to preference and quality, such as perceived effectiveness, performance, and
trust, do not see a significant effect. However, an investigation of qualitative findings
revealed that these perceptions are more closely linked to personal motivations that
are unique to individual humans. The results of this study identified five
key motivations in humans that would ultimately lead to them having a preference
for their AI teammate's teaming influence level. While this answer to RQ2.1 is promis-
ing in that it shows humans can prefer highly influential teammates, the variance of
these individual motivations will ultimately make it difficult to design AI teammates
to align with every teammate’s personal motivation.
In answering RQ2.2, whether or not AI teammate teaming influence was dynamic
did significantly impact human performance in the form of an interaction effect. Humans
who worked with AI teammates that dynamically decreased their teaming influence
saw a significantly better level of improvement than other conditions (Figure 4.2d).
Moreover, given that the inverse was not seen with increasing levels of teaming influence
(i.e., a significant worsening of performance), it can be derived that these performance
increases are actually due to participants improving and not simply due to
lowering AI teammate teaming influence. This conclusion is further backed up by the
qualitative findings that show how high levels of teaming influence at the beginning
of a task are welcome as they can help set the tone and motivate humans. However,
the qualitative results also conclude that after setting this tone, AI teammates need
to be cautious in how they transition to a different level of teaming influence as hu-
mans may have highly personal preferences for how that transition should happen in
addition to the teaming influence level they are transitioning towards.
Additionally, the qualitative findings of this study help lend external validity to
the answers to RQ2.1 and RQ2.2 by showing that personal motivations and their linkage
to AI teammate teaming influence preference will persist in the real world. However,
the motivations considered will become more complex due to the variance in context
and risk present in real-world scenarios. Thus, while the above results are highly
optimistic for the future welcoming of AI teammate teaming influence, researchers,
designers, and practitioners should tread carefully to ensure AI teammates are well
aligned with human motives, thus encouraging acceptance and usage.
4.8 Study 1a: Discussion
4.8.1 The Potential of Personal Competing Goals that Com-
plicate Teaming Influence in Human-AI Teams
The qualitative results of this study highlight how complex human-AI teaming
can become due to the existence of competing motivations within teams. Within
teaming, the complication of competing motivations is not new [353]. At their core,
teams consist of individuals who have personal motivations, all working together to
complete a team motivation or goal [455, 312]. Effective teaming is not the result of
ignoring these personal objectives for the sake of only focusing on team performance,
but rather balancing the completion of personal and team goals [409]. In fact, this
balance is so important that team leaders and managers often see the creation of this
balance as one of their most important responsibilities [90, 409]. While past research
has discussed how these personal motivations are still an important consideration for
leaders in human-AI teams [142], the results of this study show how these motivations
may actually be an important consideration for AI teammates themselves.
The results of this study demonstrate that personal motives are an explicit
consideration when designing AI teammates that exert teaming influence on
teams. Whether humans want to watch and observe an AI teammate or simply
participate in a task for the purpose of enjoying it, an AI teammate must benefit this
desire if it is going to be compatible with humans. However, as also demonstrated by
the results of this study, prioritizing a team’s goal may unfortunately de-emphasize
specific personal motives humans are going to have. The result of this de-emphasis
is a conflict of motivation, where the personal motivation of the AI could be viewed
as the efficient prioritization of a team’s goal, which may not be the motive of hu-
man teammates. Unfortunately, this conflict may come as a simple and unavoidable
limitation to the programming of AI teammates. Although humans may be able to
iteratively balance personal and team motives when operating in a team, this would
be a greater challenge for AI teammates as they will lack a level of general intelli-
gence, especially in early cases of human-AI teaming [141, 341]. Specifically, in early
instances of human-AI teams, AI teammates will only be able to consider the goals
of the team, because those are the goals they were designed and trained to complete.
While this may make them a highly performative teammate from a raw performance
perspective, we know that raw performance is not the only component that makes an
effective and compatible team member [381, 59], even if they are an AI team member
[42]. Thus, based on the results of this study, the concept of human-compatibility in
human-AI teaming will need to be updated to include the consideration of potentially
competing personal motivations to ensure AI teammate teaming influence does not
work counter to personal motivation.
Unfortunately, if ensuring AI teammates are human-centered is built on the
alignment of these motivations, then human-AI teaming has an uphill challenge in
front of it. Like most challenges in human-AI teaming, this challenge needs to be
tackled from both a human and computational perspective to be solved in an efficient
and human-centered manner. From the human side, research work needs to prioritize
the communication of AI teammate motives while also teaching human teammates
that AI teammates may not be able to actually help their personal motives due to
design limitations. Although this may not be ideal for humans, having this knowledge
is key to tempering expectations and not assuming an AI teammate is always going to
help with their own personal motive [22]. From the computational side, work needs to
advance the generalizability of AI teammate knowledge to be inclusive of the personal
motivations human teammates have. This does not mean that AI teammates should
be designed to only consider these motives, but rather that these motives need to
become a component of consideration, just like any other teaming component, such as
trust [284], ethics [142], or even team cognition [391]. If research focuses on both of
these perspectives, a healthy and necessary middle ground can be achieved in early
implementations of human-AI teams where humans are willing to compromise on the
prioritization of their personal motivations by AI teammates.
4.8.2 The Importance of Healthy Competition in Human-AI
Teams
While personal motivation was critical to the long-term perceptions humans
formed for their AI teammates, both the quantitative and qualitative results of this
work show that high levels of AI teammate teaming influence on a task are most
welcome in the early stages of teaming. Not only were a large proportion of participants
unopposed to AI teammates “setting the tone”, but this action ultimately led to per-
formance improvements in human teammates. In explaining this outcome, one could
look at the concept of competitiveness in teaming [94]. Within teams, a healthy level
of competition can exist where humans actively encourage each other to improve by
setting an example of high performance [208]. In fact, teams often find ways to pro-
mote this competitiveness as it not only helps performance but also other critical
teaming factors like knowledge sharing and adaptability [175]. Overall, competitive-
ness, when used correctly [212], can benefit teams in tangible ways. The results of
this study suggest that AI teammates would have the ability to also promote this
healthy competition in human-AI teams.
The ability for AI teammates to encourage improvement is not unknown, as
past work has shown that AI teammates that initiate conversation improve teaming
outcomes like team cognition [391]. Extending on this idea, both the quantitative and
qualitative results of this study show that the teaming influence AI teammates have on a task can
effectively goad humans into improving at the task level without harming long-term
perceptions. This finding might also have direct implications for training human-AI
teams. While it may not be practical for AI teammates to decrease their teaming
influence over time in a real-world task, humans may go into real-world tasks with a
higher level of performance if they are trained with an AI teammate that decreases
teaming influence, as suggested by the quantitative results of this study. This con-
cept could relate to perturbation theory of teams, which discusses how experience
in isolated training benefits real-world performance [160, 95]. AI teammates could
be designed to be more competitive and increase the efficiency of these perturbation
exercises, thus better preparing humans for real-world human-AI teaming.
However, the implementation of this competitiveness is not entirely intuitive.
While an initial assumption might posit that the competitiveness provided by AI
teammates should grow alongside human improvement, the actual results of this
work suggest that AI teammates should start highly competitive but back off and
give humans room to grow and improve (i.e. set the tone). If competitiveness is
not implemented in this way, a healthy competition becomes unhealthy, which can
be detrimental to teams [77]. The effects of this unhealthy competition are also
evident in this study, as AI teammates that increase in teaming influence
ultimately stagnate the performance improvement of humans. Thus, both sides of this
competitiveness can be seen through the results of this study, with results indicating
how helpful healthy competition can be to human-AI teams.
However, work still needs to explore other ways AI teammates can “set the
tone,” and management/leadership research may point to a great starting place as
motivation and the encouragement of competition are critical to those roles [193, 302].
For example, human-AI teams could utilize collaborative training where humans ac-
tively train alongside AI teammates to improve motivation [200]. More robust and
specific communication strategies could also be designed that allow AI teammates to
directly motivate human teammates through the use of competitive language [484, 45].
Even the use of gamification methods, which are fantastic for AI design and educa-
tion [380, 300, 476], could also create a level of healthy competition and motivation in
teams [386]. This work can already be seen in human-AI teaming from a theoretical
viewpoint [142], but empirical explorations of this motivation are still needed. The
above are just a handful of examples of how AI teammates can be actively designed
to improve the motivation humans have in human-AI teams through the utilization
of healthy competition. The results of this study merit the further exploration of
these concepts with the goal of ensuring AI teammates are not demotivating to the
improvement of human teammates.
4.8.3 Design Recommendations
4.8.3.1 AI teammates should be highly performative when they first join
human-AI teams.
One of the most interesting interpretations created from this study’s findings
was the ability for highly influential AI teammates to both motivate and demotivate
individuals based on personal motivation. However, demotivation was more likely to
occur after repeated interaction. Moreover, the results of this study show that early
instances of high AI teammate teaming influence can benefit and encourage improve-
ment in human performance. Thus, having AI teammates be highly performative and
influential in the early stages of interaction will allow for potential motivation, but
scaling back their teaming influence during the middle stages of interaction will pre-
vent demotivation while also allowing growth. While the results of this study show
that a universal ideal level of AI teammate teaming influence or variability is not
possible, this compromise allows potential benefits from both high and low levels of
teaming influence to be obtained.
The results of this study and this design recommendation provide interesting
implications in light of previous design recommendations. Previous work has shown
that human-centered AI should not always be highly performative as it does not
promote human compatibility [42]. However, based on the results of this study, this
design recommendation posits that early stages of teaming would be more welcoming
to high levels of AI teammate teaming influence and performance as long as that
teaming influence is not a permanent fixture. Within teaming literature, this process
of “setting the tone” is not uncommon and has been shown to still be effective in light
of competing personal motives [6]. Thus, enabling AI teammates to set this tone can
ultimately benefit not only humans' perceptions but also their ability to grow and improve
as teammates even if they have competing personal motives. However, as iterated
by this study and this design recommendation, setting the tone is more critical for
earlier stages of human-AI team interaction.
4.8.3.2 Human teammates should schedule performance updates for their
AI teammates.
While the early and middle stages of interaction can follow the design rec-
ommendation above, the results of this study show that long-term teaming influence
from AI teammates has to be decided based on the personal goals of humans. Thus,
long-term teaming needs a more individualistic way of designing AI teammate team-
ing influence. Specifically, this study found that humans preference difference for both
the level of teaming influence AI teammates have and how that level varies. Given
this finding, human teammates should be able to select when the teaming influence of
AI teammates might change by personally scheduling their update cycles. Doing so
would not only provide human teammates with a more preferred experience, but that
preferred experience would lead to more positive perceptions of the AI teammate based on
the results of this study. Importantly, security-critical updates would not follow this
recommendation as it is more focused on updates that impact AI teammate teaming
influence.
This design recommendation provides a potentially different recommendation
for updating AI teammates than prior work, which has focused on the importance of
frequent and minimal updates to AI systems [22]. Rather, the results of this study
indicate that some humans will better perceive AI teammates if they were to have
larger, more visible updates as it makes it easier for them to adapt. Thus, rather
than having a global rule for using more unnoticeable updates for AI teammates, this
study recommends allowing teams to collaboratively determine their team-specific
rule for updating and advancing AI teammates. This process would be somewhat
similar to performance evaluations in human-human teams, which allow humans to
synthesize their faults, successes, and areas of needed improvement into actionable
improvement plans [134, 168]. With this design recommendation, AI teammates
would effectively be given their own personal performance evaluation cycle that allows
teams to evaluate, update, and understand their AI teammate.
4.8.3.3 Human’s personal motives should be a minor consideration of AI
teammates.
As repeatedly mentioned in this study, personal motivation is one of the most
critical factors when determining the acceptance of AI teammate teaming influence.
However, AI teammates are not designed with personal motivations in mind but
rather team motivations, and this will be especially true in the early stages of human-
AI teaming when AI teammates lack general intelligence. Thus, the performance feed-
back AI teammates receive from their human teammates should also include feedback
on whether AI teammate teaming influence conflicts with any personal motivations. In doing
so, AI teammates will gain team-specific knowledge and become better aligned with
human teammates while still learning from general task feedback. Unfortunately, this
recommendation will not perfectly align AI teammates with personal motivations as
there can be many personal motivations, and AI teammates still have to ensure they
are completing their assigned tasks correctly. However, this recommendation would
still improve overall alignment and in turn AI teammate acceptance.
While it would be nearly impossible to design AI teammates with personal mo-
tivation in mind before those teammates are assigned to human-AI teams, this ad-hoc
consideration of personal motivations would provide a good compromise. Importantly,
even in human-human teams, the alignment of personal motivations is difficult and
not instantaneous or guaranteed with teams constantly dealing with conflict between
personal and group motivations [194]. However, teams are still able to coordinate and
perform even with some degrees of misalignment [484], and the results of this study
show that this is still true for human-AI teams. However, even if humans are still able
to work with misaligned AI teammates, this design recommendation lessens misalign-
ment to ensure preference and acceptance, which will ultimately benefit long-term
teaming.
4.8.4 Limitations and Future Work
The most apparent limitation of this work is the population that participated
in the experiment. While the utilization of a younger audience is a limitation, this
population is also highly relevant to human-AI teaming, as it makes up a large
component of upcoming workforces, which are likely to experience the integration
of AI systems. Thus, the results taken from this demographic remain relevant to
future workforces. However, work can still examine other demographics, such as
older individuals, to identify how seniority or even age in general may impact the
perception of AI teammate teaming influence. Secondly, this study is limited by its
operationalization of teaming influence. The teaming influence AI systems have on a
task will not only be increased by its design as a teammate but also by other factors
that should be explored by research. For instance, the number of AI teammates
present on a team would impact how much teaming influence AI teammates have on
a task in general, and this type of teaming influence on a task may look different
than what was examined in this study. Thus, this study should not be the final
exploration of task-level teaming influence in human-AI teams but rather provide a
foundation for understanding it. Finally, the context examined in this study may
provide a limitation as gaming platforms are not always indicative of the real-world.
However, personal motivations present in this study will most likely become more
impactful in real-world teams where humans are more invested in their teams and
task. Thus, while these results likely apply outside of the utilized context, future work
should still examine the role of personal motivations in various human-AI teaming
contexts.
4.9 Study 1b: Overview and Research Questions
One of the most interesting findings of Study 1a was how uniquely personal
peoples’ perceptions and reactions to teaming influence, and in turn social influence,
were. These findings inspired a closer examination of this phenomenon and cre-
ated Study 1b, which critically examines the process of teaming influence becoming
social influence in human teammates. This social influence ultimately creates last-
ing changes in human behaviors and perceptions that impact their interactions in a
human-AI team. While Study 1a examined the impact of teaming influence, Study 1b
examines the actual process of teaming influence becoming social influence through
human teammates’ behaviors and perceptions. The following research questions ex-
plore this social influence process by answering dissertation RQ1.
RQ1.1 What are the ways AI teammate teaming influence becomes social influence
that changes humans’ behaviors?
RQ1.2 What perceptual changes arise in humans as AI teammate teaming influence
becomes social influence?
4.10 Study 1b: Qualitative Methods
While the experimental context and the interview design for Study 1 were
already elaborated on, the purely qualitative nature of Study 2b means that different
analysis methods were used, and these methods will be discussed. The interviews
were transcribed within a few days upon completion of each interview by the primary
researcher. During transcription, relevant prosodic information (e.g., hesitation) was
marked, but speech disfluencies (e.g. fillers, stutters) were removed from the ex-
cerpts for ease of reading. The transcripts were manually coded using spreadsheets,
highlighters, and affinity diagramming. During open coding, additional researchers
exhaustively reviewed the data and developed an initial set of codes. Researchers took
care to explore the boundaries of the codes by actively looking for discrepant data [277].
Through iterative coding, initial codes were merged, broken down, or modified through
the identification of alternative interpretations and cases that did not fit [277]. A total
of 12 codes and 368 quotes were finalized during the open coding stage. Three re-
searchers then arranged a subset of the codes around the research questions. The researchers
further iterated by grouping similar codes, examining quotes in their context,
and uncovering the connections underlying the themes to piece together a framework.
4.11 Study 1b: Results
The first half of this analysis details behavioral changes that are character-
istic of AI teammate social influence that stems from teaming influence. During
interviews, participants heavily emphasized that the majority of social influence they
experienced impacted their behaviors, with AI teammate teaming influence often lead-
ing to behavioral social influence and subsequent adaptation. Participants
were still aware of how teaming influence became a perceptual social influence and
impacted their perceptions, but the creation of behavioral social influence was much
more prevalent. The second half of this results section discusses perceptual changes
that can occur during this process. Thus, the understanding of how teaming in-
fluence becomes social influence is handled from both a behavioral and perceptual
perspective.
4.11.1 RQ1.1: Humans Proactively Interpret AI Teammate
Teaming Influence as Social Influence and Adapt Their
Behaviors Accordingly.
When AI teammate teaming influence is present in a human-AI team, it is
important to understand how humans change their behaviors to accommodate that
teaming influence, which is social influence. Fortunately, the results of this study
are highly promising in that humans naturally and rapidly allow AI teammate team-
ing influence to become social influence. However, multiple conditions exist within
this process that enable humans to adapt rapidly. Unfortunately, this adaptation
can sometimes become too large and result in humans forfeiting their own teaming
influence in the process. These two types of adaptation along with the underlying
processes that contribute to them are discussed below.
4.11.1.1 If Conditions are Met, Humans Iteratively Adapt Around the
Teaming Influence an AI Teammate Exerts.
In regard to the behavioral changes that stem from AI teammate teaming
influence, proactively adapting around AI teammate teaming influence was the most
prevalent. This is fantastic news for human-AI teams as it shows that the presence
of an AI teammate’s teaming influence on a task and the social influence that stems
from it is in itself not a deterrent, and humans will do what they naturally do: adapt.
Moreover, humans do not need to be told to adapt, as they will proactively do so if a
few conditions are met. Thus, human-AI teams will naturally see AI teammate
teaming influence become social influence when those conditions are met. This social
influence presents itself as a form of adaptation and is discussed below; afterward,
the conditions that need to be present to permit this social influence are discussed.
In regard to general adaptation, participants often went through a process
of figuring out the AI teammate’s behavioral pattern and came to realize the need
to adapt to the AI teammate’s behavior and different levels of teaming influence.
Participants even saw this adaptation process as being a means for them to gain
more teaming influence on their tasks and teammates. The following quotes echo
these sentiments:
I felt that I had to kind of take a step back and realize what they were
doing. And I had to go based on them rather than the other way around.
(P05, Female, 18, Caucasian)
After playing with the artificial intelligence, I was realizing like, okay, this
is what they were gonna do. So I need to kind of base my actions off that.
(P07, Male, 18, Caucasian)
So I would say the fact that I was able to adjust better led to me having
more influence on the game. (P18, Male, 21, Caucasian)
I was able to adjust better which led to me having more influence on the
game. (P28, Male, 22, Caucasian)
While the observation of AI teammates is not instantaneous, interviews brought
to light that the actual adaptation process itself is rapid. Essentially, once humans
decide they want to adapt to an AI teammate and its teaming influence, that adapta-
tion becomes a priority that is undertaken quickly. In other words, teaming influence
from an AI teammate rapidly becomes a social influence on behavior. It appears
that undertaking such adaptation often feels natural for humans, and can be done
fairly quickly and dynamically. The following quotes are examples of how participants
expressed the rapid pace at which they were able to adapt:
I usually figured out the bots pretty quick...I would be able to adjust part
of the way through the game. (P28, Male, 22, Caucasian)
I feel like if you were to plan before, game plans change according to the
people and it’s easier for me on the fly to be able to adapt to them. (P23,
Female, 18, Caucasian)
Moreover, humans are often willing to repeatedly adapt to AI teammates until
they feel that they are working together as a team. Many participants explained that
their actual method of adaptation was to promote and benefit the actions of their AI
teammates. This led humans to adapt toward AI teammates rather than
hoping AI teammates adapted toward them, ultimately allowing a more
AI teammate-centered adaptation process, as illustrated by the following quotes:
So definitely, with each new game, I definitely had to change how I played
based on what the teammate was doing. (P02, Male, 18, Caucasian)
My goal would be to try and enable a teammate who hopefully is in a
good position to receive a pass, try and put the ball in a position that
would be out of reach for the defenders... And I’ll take shots if I need to, I
can shoot but, it’s really just let the other teammate play how they want
fly, and try and work around. (P18, Male, 21, Caucasian)
This process of iterative adaptation shows that humans are more than willing
to meet AI teammates halfway when it comes to coordination. While it may be
a novel effort to design AI to coordinate around humans, these findings show that
humans themselves are perfectly willing to continue to coordinate until they achieve
good teamwork. Accounting for this desire to coordinate might be difficult, but it
should be facilitated, as the above quotes show that humans are naturally dynamic
in their behavior, especially when experiencing AI teammate teaming influence.
Importantly, humans are willing to adapt even if it is not optimal for
them. In other words, humans are willing to make room and adapt to AI teammates
even if that adaptation ultimately harms their individual performance. Thus, adap-
tation could ultimately be harmful overall if the adaptation humans undergo worsens
their performance too much. The following quote echoes this sentiment:
It would probably just come down to how far down I got before I was able
to adjust and how well I was able to recover after I adjusted... I would
say that the adjusting process hurt my ability to play the game effectively.
(P28, Male, 22, Caucasian)
Humans who have the desire to garner more teaming influence in their human-
AI team will quickly observe and adapt to AI teammates. Even if this adaptation leads
to short-term drops in individual performance, humans still recognize the benefit of
that adaptation to long-term teaming as well as the potential teaming influence gains
from doing so. It therefore seems that human adaptation, and thus social influence, is
almost a given within a human-AI team: even with an adaptive
AI teammate, humans will still work to meet it in the middle. However, multiple
factors contributed to humans' motivation to adapt to AI teammates.
While the understanding that humans will adapt to AI teammate teaming in-
fluence is important, gaining a greater understanding of the factors and processes that
contribute to the adaptation process is also critical. Specifically, three prerequisite
conditions, each with its own consideration factors, were identified as needing to be met in
order for adaptation to begin. The following list presents these three conditions, which
serve as the subthemes and subsequent subsections for this results section:
1. Humans need a comfortable environment to adapt, which requires a semblance
of control.
2. Humans need to justify their adaptation to an AI teammate, and they either
use the limitations of AI or the skill level of AI to justify that adaptation.
3. Humans need to gather knowledge about their AI teammate before adapting.
Condition 1 - Controllable Environment: Humans will not allow AI team-
mate teaming influence to become social influence unless they have and
maintain a semblance of control. One of the most prevalent factors humans
considered with regard to AI teaming influence and their adaptation around it was a
sense of control within their human-AI team. Having a semblance of control is a foun-
dational necessity when creating environments that humans feel comfortable adapting
in. In the following quotes, p07 defines social influence in terms of having control,
and p29 explicitly links their personal teaming influence with their perception of
control:
I think influence pretty much means the ability to have some control over
what’s going to happen. (P07, Male, 18, Caucasian)
I mean, at the end of the day, I know I have the most influence over the
game itself because I can obviously turn off the console. (P29, Male, 18,
Caucasian)
Importantly, humans associate control and teaming influence with each other
while still viewing the two concepts as distinct. For instance, they are fine with
imbalances in teaming influence, but losing a sense of control is not acceptable. This
sense of control does not need to be highly complicated either; rather, it can be as
simple as a basic on/off switch for the AI teammate. Even being allowed to stop
working with the AI teammate could be a type of this control:
But the human has influences they can start the game. Stop the game,
quit the game. (P29, Male, 18, Caucasian)
Humans’ need to have an overall level of control over their AI teammate boils
down to the fear that things might go wrong with AI systems. Participants believed that
having that last semblance of control over the system could prevent things from going
south:
But I don’t know how completely I would trust it. Because then again, I
also like to have control over it. So maybe like half and half, if I see that
it’s not really doing what it’s supposed to, I’d like to have some control
over it. (P01, Female, 19, Latino or Hispanic)
Having a sense of control acts as a safety blanket that allows adapta-
tion to occur without the risk of complete failure. Once humans feel they have a way
out (e.g., switching off or quitting the game), they can feel comfortable adapting their
teaming influence to the AI teammate within that blanket:
I’d be pretty comfortable having out of my house, as long as I have the
ability to shut it off and turn it on whenever I want. As long as I’m in con-
trol, I would feel pretty comfortable with it. (P14, Male, 18, Caucasian)
Participants’ need to have a semblance of control was also explicitly noted
as extending to real-world applications of AI. One such example is self-driving cars,
mentioned by many participants. In these discussions, participants mentioned the
importance of having that final control in the form of a steering wheel:
Yeah, I think definitely like getting in a car that can automatically park
themselves. I would be much more comfortable with that versus getting
in a car and having no ability to control anything. (P17, Female, 18,
Caucasian)
for me personally, I would like to take control back of the car, and I would
like to do more. (P05, Female, 18, Caucasian)
I feel like in the instance of a self driving car or something like that. I
would completely trust it. Obviously, they leave you know it up for that’s
why you have a steering wheel. Because sometimes it does need human
input. (P09, Male, 19, Caucasian)
The above results show one of the most critical findings of this research:
for teaming influence to become social influence, which happens in the form of
adaptation, humans need a sense of control over their AI teammates. If they feel that
adaptation will ultimately result in them feeling out of control of their teammate or
their team, then that adaptation cannot occur, as shown by participant p01's
quote. However, this does not mean that AI teammates cannot be more influential
than human teammates. Rather, humans are comfortable with high levels of teaming
influence if they feel they can stop it at any time. Once this comfortable and con-
trollable environment is created, humans can then begin justifying their adaptation
to AI teammates.
4.11.1.2 Condition 2a - Technological Justification: Humans justify their
adaptation to AI teammates by acknowledging their perceptions
of the AI teammates’ limitations.
Interestingly, participants justified their adaptation to teaming influence through
their perceptions of the general limitations of AI teammates and their capabilities to
adapt. Many participants alluded to the idea that adaptation is a uniquely human characteristic,
making it the human teammate's responsibility to adapt to the AI teammate. For
instance, humans can achieve higher shared cognition with other humans through
complicated processes of observation and coordination that can be too subtle to pro-
gram but feel natural for humans. The following quote mentions this natural feeling
of adaptation:
It feels a lot more natural and everything working with a human. Because
you can communicate the ideas a lot better just through subtle things that
you do. (P07, Male, 18, Caucasian)
Additionally, humans are perceived as being able to maintain awareness of
the AI teammates' situation and behaviors and adjust their own behaviors accordingly.
However, that is not the case with AI teammates. Participant p05 believed that AI does not
yet have the ability to observe and process the human teammate's situational and
behavioral information to its advantage the way humans do:
Because I can read the situation of what they’re doing. But then they
can’t really understand what I’m doing and how that would affect what
they should do. But I can understand that what they’re doing. I can kind
of work around it better than they can. (P05, Female, 18, Caucasian)
Humans also have the impression that AI systems are a type of machine. This
inherent impression of AI’s machine nature leads to the perception that AI is designed
to do repeated and simple tasks, but not tasks that need adaptability. Moreover,
the consistency of AI teammates is not actually seen as a bad attribute. In fact,
participants felt adaptation was a core trait of humans while consistency was a core
trait of AI teammates. The following quotes are examples of these concepts:
Machines seem to tend to do the exact same task over and over again.
They’re programmed to do one thing. I feel like a human could adapt to
changes in their environment. So I feel like I would trust a person. (P22,
Female, 18, Caucasian)
I think the person actually make it hard, because
machines are pretty consistent. Humans aren't as consistent as machines
are. (P09, Male, 19, Caucasian)
I think since humans are so inconsistent in their decision making, and
they’re not always making rational decisions, I assume most AI would
tend to make, because I mean, they’re going to be trained for it. (P18,
Male, 21, Caucasian)
The above quotes illustrate how humans see adaptation as a uniquely human attribute
that does not even need to be completed by AI teammates. While this is great news for
human-AI teams, as humans will eagerly take the role of the adaptor, it does complicate
existing work that may prioritize the ability of AI teammates
to adapt. This result plays a key role in the discussion of this work, as the
research surrounding AI teammates may need to pivot to better accommodate actions
humans perceive as “human”.
Moreover, other limitations in AI can also be seen as positive attributes in
the right situation, such as emotional limitations. Unlike humans, whose performance
might be affected by mood, AI is immune to mood changes. Below, p28 described
how his own performance was significantly impacted by his feeling frustrated when
his actions were disrupted by his AI teammate, and contrasted that with how the AI
would not be affected had he made similar disruptions to it. Participant p05 made a similar point
regarding the benefit of AI having no emotion leading to greater consistency:
Bots don’t get frustrated. (P28, Male, 22, Caucasian)
I would expect more consistency from the machine because they don’t
process it (emotion) as well. (P05, Female, 18, Caucasian)
While this theme is about the limitations AI teammates are perceived to have,
the actual cause of teaming influence becoming social influence is that humans believe
that they as humans need to adapt to this teaming influence. Specifically, the above
quotes illustrate that adaptation is a role best suited for humans while consistency
is best suited for AI teammates. Importantly, this is already the goal of human-AI
teams, a domain in which researchers want to leverage the strengths of humans and
the strengths of AI teammates to make a more cohesive team. The above results
show that adaptation may in fact be a strength of humans in that humans want to
adapt to other teammates.
Condition 2b - Teaming Justification: Comparative skill levels are also
a justification for adaptation. In addition to AI’s limitations being used as a
justification for teaming influence becoming social influence, the comparative skill of
AI teammates also motivated adaptation. When a healthy skill gap exists within
human-AI teams, humans can have a fairly healthy level of adaptation with their
AI teammates. As discussed before, humans are fine with being “carried” by an
AI teammate. Part of this sentiment is a result of the comparison of their skills.
Participants were willing to let more skilled AI teammates be highly influential and
further justified their adaptation through comparative skill levels, as p32 noted:
I did feel like I was the one that needed to adapt just because I had never
played the game before. (P32, Female, 18, Caucasian)
This is a highly interesting finding as adaptation is in itself a skill, meaning it
requires effort and a degree of thought to do effectively. Furthermore, it seems that
participants equate adapting with actively assisting AI teammates. Some justified their
adaptation around their AI teammates by arguing that if they had better skills than
the AI, they wouldn’t mind the AI teammate adapting to them:
It’s just like the experience I have with video games. If I was more experi-
enced in this game, and I had played it multiple times before, I would feel
more comfortable with them adapting to me. But just because I’m really
inexperienced, and don’t know much about the game. (P31, Female, 18,
Caucasian)
The notion of comparative skill is not just in reference to overall game skill
but also considers individual teammates’ different expertise. Teammates who are less
skillful at skill “A” should adapt to those who are more skillful at skill “A”. Moreover,
this may differ as some teammates may focus more on making sure no one is doing
a skill that they are horrible at, while others may focus on ensuring each person is
doing the skill they are best at. Either way, this relationship could also be dynamic
as the skills humans have are dynamic and grow over time. The following quote by
p30 echoes this sentiment:
it would change a lot depending on what their skills are. So for example, I
was better at scoring than another teammate was, then I would probably
want to focus more on scoring. whichever one they’re better at adjust
accordingly to who should do what. (P30, Male, 20, Caucasian)
However, as mentioned before, if a gap between general skill or role-specific
skill becomes too large, it often leads to perceptions of frustration and annoyance,
which in turn leads humans to concede to an AI teammate. When the disparity
in skills is too great, human adaptation around AI teammates falls to the
extreme of giving in too far to the social influence and reverting to compliance. Many
participants admitted that they gave up trying after seeing that their AI teammates
were much better than them:
Just because they’re so good, I kinda like gave up. (P21, Female, 18,
Caucasian)
Because they were obviously a lot better than me, I let them do a lot of
the work. (P27, Female, 18, Caucasian)
The above results reiterate a common finding in human-AI teaming that in-
creasing AI teammate skill does not automatically increase human teammate skill.
When considered in light of social influence, we see that humans intrinsically link
performance and teaming influence, and that linkage can encourage them to adapt.
However, if there is a perceived disconnect, either due to a lack of AI teammate
performance or an insurmountable gap, then this linkage is what harms human-AI
teams. Thus, in addition to the limitations of AI teammates, the actual capabilities
and utility of these teammates are similarly important.
Condition 3 - Knowledge: Humans wait until they have knowledge of
their AI teammate before they begin adapting to them. Once humans feel
comfortable adapting after gaining a sense of control and justifying their adaptation,
the actual adaptation and social influence process can begin. The first step in this
process involves humans waiting, observing, and learning from their AI teammates.
Given that humans prioritized and took responsibility for adapting and coordinating
when working with an AI teammate, the role of knowledge and experience ultimately
became one of the most integral considerations they made.
Many participants mentioned how they carried out their actions based on what
they had observed the AI doing, as the following two quotes illustrate:
And then once you see what they do, it’s easier to figure out what you
should be doing. (P17, Female, 18, Caucasian)
It wasn’t hard. It was just observation. It’s gonna help us, I should just
do it. (P16, Female, 18, African-American)
More than just being an important factor, knowledge and experience are essen-
tially a prerequisite for most humans to even start the adaptation and coordination
process. One common strategy participants took was to pause and wait until they
could figure out the behavioral pattern of their AI teammate to determine their own
actions:
I had a more relaxed approach to the ball as much, i just kind of waited
to see what my partner was doing. (P33, Female, 21, Latino or Hispanic)
I would wait and see if they were like trying to score more. (P27, Female,
18, Caucasian)
The above illustrates a potential roadblock to accepting and adapting to
AI teammate teaming influence, in that teaming processes may essentially be on hold
while humans learn about their new AI teammates. While humans are extremely
quick to adapt, the identification of how and why they need to adapt may not be
as rapid. Thus, finding a way to gain this experience and knowledge in a first-hand
manner without interrupting existing team processes would be integral to the success
of a human-AI team.
In regard to long-term teaming, repeated experience and observation of the
positive performance of the AI teammate seemed to have revised humans’ negative
prior knowledge about AI and lowered barriers they may have put up in front of AI
teammates, which in turn allowed them to become open to adaptation and collabora-
tion with the AI teammates. This finding is highly optimistic as this means humans
are willing to give AI teammates a second chance if they are still willing to adapt to
them. For example,
So I guess I had that expectation going in. Then after the first two games,
I kind of realized that I could use this to my advantage. It’s much better
than I thought. So that’s why I feel like I lost most of the influence in third
game, because then I started learning, almost like the AI take control and
pass it to me, rather than me just trying to take complete control. (P14,
Male, 18, Caucasian)
I feel like towards the end, I definitely was getting a lot more in the groove
and everything. Once I realized that, I think it was a little bit of growth
on my own part where I stopped chasing the ball the entire time. And I
kind of relied on that teammate a little bit more watched what they were
doing, and tried to figure out how exactly I could interact with the game
that was going on. (P07, Male, 18, Caucasian)
The above results further contextualize what it means for humans to “rapidly”
adapt to AI teammates. Although the actual adaptation is rapid, this is because humans
prioritize gaining a robust understanding of their AI teammates before attempting
to adapt. For instance, multiple participants signaled their need to wait to gain this
knowledge to ensure they were adapting accurately. However, this finding presents
a double-edged sword: the creation of this understanding ultimately leads to
highly capable adaptation while also slowing down team processes.
The above themes and conditions present the finding that human teammates
will proactively interpret teaming influence as behavioral social influence, which
presents itself in the form of adaptation. However, the conditions that enable this
interpretation and adaptation are numerous and varied. Once humans have a sem-
blance of control, they can begin justifying their adaptation either by their perceived
limitations of AI teammates or potential skill gaps that exist in AI teammates. Once
this justification is created, humans simply wait, observe, and learn with the goal
of planning out their adaptation. After these steps are completed, humans begin to
iteratively adapt to their AI teammates with the goal of accomplishing team goals
and increasing their teaming influence as a teammate. However, adaptation is not
always done in a healthy way, as humans can sometimes become demotivated and comply with
AI social influence, which is the final finding discussed in the subsection below.
4.11.1.3 Humans will Forfeit Their Own Teaming Influence if the Team-
ing Influence of an AI Teammate is Disruptive.
While the above theme demonstrates how humans naturally interpret teaming
influence as social influence through healthy and proactive adaptation, an extreme
type of social influence can occur if teaming influence is disruptive. Some participants
fully conceded and surrendered to AI teaming influence by giving up on the task alto-
gether. The potential for humans to simply give up on a task when working with an
AI teammate is worrisome, as human-AI teaming is only going to work if both human
and AI teammates are active participants. The below results demonstrate how some
humans forgo healthy adaptation and opt to fully comply with AI teaming influence
to the point where they no longer look to leverage their own teaming influence. For
example:
I think they were influential, that I kind of just stop trying to get the
goals much whenever they... Yeah, they’re better than me. (P27, Female,
18, Caucasian)
Unfortunately, just as they are quick to adapt, humans may be fairly quick to
concede to AI teaming influence. This suggests that early imbalances in influence may
have long-term impacts on human-AI team dynamics. For instance, p27 reflected
that they had only one episode of observing the AI's teaming influence, and that
singular experience terminated their willingness to engage in further efforts:
One time I tried to hit the ball and then the AI came and knocked me
out of the way so that was the first time I was like, I don’t really need to
do anything anymore. (P27, Female, 18, Caucasian)
Based on these interviews, it seems that the haste to adapt and the haste to
concede are almost the same. Essentially, what enables humans to quickly adapt also
enables them to quickly give up on a task. While this haste may be beneficial from
an adaptation standpoint, it is worrisome from a teaming standpoint, as by the time
signs of concession appear it may be too late to course correct and improve human
motivation.
Importantly, this conceding often only happens when it comes at no cost to
the human teammate. Conceding to AI teaming influence in this context is different
from giving up full control. For example:
Yeah, it was just like, if I just put my hands up, I’m sure they could have
done perfectly fine without me. (P20, Male, 20, Latino or Hispanic)
If it can do the whole thing, then everything it should do the whole thing,
because it’s at no cost to me, really, for it to do the whole thing. (P09,
Male, 19, Caucasian)
Generally, this concession is caused by the teaming influence of AI team-
mates disrupting humans' own teaming influence. For instance, if a human
is trying to move a shared resource or go for a goal and an AI teammate prevents
that, then humans are quick to be discouraged and stop trying to have any teaming
influence at all. This sentiment is exemplified by the following quotes:
My teammate kept doing it, kind of taking the ball away from me. (P08)
The AI was just whacking the ball out of my possession, which I didn’t
really care because I was going the wrong direction. But I think my
feelings didn’t change towards the AI. And my actions didn’t really change
either. (P26, Female, 18, Caucasian)
It ran into me a couple of times, but it wasn’t anything irritating like the
first one. I was just too frustrated to actually properly adjust with the
first one. (P04, Female, 19, Black, Asian, Caucasian, Pacific Islander)
This theme provides an interesting point on how social influence can manifest
as unhealthy adaptation by humans. However, avoiding disruption offers a means
of preventing this unhealthy manifestation of social influence. Unfortunately, early
imbalances in teaming influence that lead to disruption may be difficult to predict, as
an imbalance is reliant on the comparative teaming influence of both AI and human
teammates. For instance, an imbalance that is seen as demotivating to one human
may not be demotivating to another because those humans have different levels of teaming
influence. Thus, understanding these potential imbalances and the factors that lead
to this perceived imbalance before they exist would be the most effective method of
encouraging humans to not concede.
4.11.2 RQ1.2: Humans Vary in How Their Perceptions Change
When Experiencing AI Teammate Teaming Influence
Based on Teaming Factors They Find Important.
While interviews demonstrated that social influence in AI teammates
most commonly manifests as behavioral social influence, teaming influence was still
able to create social influence on humans’ perceptions. However, this perceptual so-
cial influence often comes as a result of behavioral social influence and the adaptation
process, making this social influence more reactive than the above theme. Moreover,
perception can vary from person to person based on the teaming factors they find
important and whether or not an AI teammate’s teaming influence benefits those
factors during the adaptation process. Three common perceptual changes regarding
AI teammate teaming influence emerged from our data: 1) the perceptions of the AI
teammate’s teaming influence leading to a sense of team synergy; 2) the perception
of the AI teammate’s teaming influence as being helpful; and 3) the perception of
the AI teammate’s teaming influence being frustrating. The following themes dis-
cuss the perceptions created by social influence and the factors that determine these
perceptions.
4.11.2.1 When AI Teammate Teaming Influence Increases Playstyle Syn-
ergy, Humans Build Greater Perceptions of Team Synergy.
The first theme centers around how a balance in teaming influence between
human and AI teammates during the adaptation process was often perceived as a type
of healthy teamwork. This teamwork perception hinged around teammates having
healthy chemistry or give and take. Participants p09 and p32 repeatedly stressed
the importance of this teamwork perception, with the following quotes providing
examples:
(When) it’s working on a team, you should carry equal weight, (it) doesn’t
matter how much better or worse you are. Obviously, you can be more
assistive if you’re a lot better. But it’s the concept of working on a team
for me that you feel like everyone should carry their own weight as much
as they can. (P09, Male, 19, Caucasian)
Definitely. Because the overall performance was better. And it led to
more. I don’t wanna say communication, but teamwork. (P32, Female,
18, Caucasian)
However, perceptions of teaming synergy may not be as strong when there
is an imbalance in teaming influence. Ultimately, this imbalance makes perceptions
of teaming weaker as AI teammates are portrayed as more individualistic. If one
member has too much teaming influence, it can feel as if the team is not operating
as strongly as it could be. The following quotes illustrate the importance of having a
balance:
He (the AI)’s more like the soccer player who does things on his own.
(P01, Female, 19, Latino or Hispanic)
So it wasn’t really like we were teammates. We were just both trying to
get the ball into the same goal. (P26, Female, 18, Caucasian)
Nevertheless, this does not mean that healthy human-AI teams cannot have imbalances
in skill and teaming influence, as a healthy balance is not always
about splitting responsibility 50/50 but rather about splitting responsibility based on ability.
As p07 (Male, 18, Caucasian) expressed, being the one carrying the team can be
burdensome for a human teammate and he would rather have the AI perform that
job because they are able to. A considerable number of participants also expressed
positive feelings about being carried:
I would not want to have to carry the entire time. That’s just a lot of
pressure to have (P07, Male, 18, Caucasian)
Yeah, the (AI) teammates have been carrying the team. They’re real
good. (P24, Female, 18, Caucasian)
Ultimately, what actually dictates this balance is not a numerical comparison
but rather a stylistic comparison. If the teaming influence exerted by an AI teammate
synergizes with the teaming influence exerted by a human, then it creates a perception
of team cohesion. For instance, players who prefer playing defensive roles have a
greater level of team perception when AI teammate teaming influence is targeted
towards offensive responsibilities. Examples of this sentiment are as follows:
the other two we did decent in because I started to play more of a support
role as opposed to an attack role. Yeah, and that helped out better there
(P30, Male, 20, Caucasian)
I understood that my teammate was the scorer. Yeah, so it was me had
to disrupt the field (P29, Male, 18, Caucasian)
I think I learned what they did, whether they were like really defending
or they’re really trying to score it, and then I would do the opposite of
what they did. (P27, Female, 18, Caucasian)
The fact that human teammates are not opposed to being carried by their
AI teammates is fortunate as it shows promise for the acceptance of AI teammates.
However, it is clear that AI teammates have to meet teaming expectations to jus-
tify carrying their human teammates. Additionally, carrying a teammate is heavily
oriented towards filling a gap and creating a synergy that prevents large overlaps in
teaming influence. Ultimately, if this synergy exists, then the social influence exerted
by AI teammates on human perception manifests as strong perceptions of team co-
hesion. A lack of synergy, on the other hand, prevents strong perceptions of teaming
from forming. These perceptions would be highly beneficial to humans as they would
in turn increase the perceived effectiveness and performance of human-AI teams.
4.11.2.2 As Teaming Influence Transitions to Social Influence, Human
Perception of AI Teammate Helpfulness Becomes Nuanced.
The help and benefit provided by an AI teammate’s teaming influence was a
critical perception humans formed during the adaptation process. Moreover, this help
can be described in a variety of forms, including benefit, assistance, and positivity.
Regardless of the terminology, however, the teaming influence AI teammates present
creates social influence that can build perceptions of helpfulness in humans.
For instance, p04 and p15 explicitly associated teaming influence with being
helpful in their definitions of social influence:
My definition of influence is someone or something that helps you make a
decision. (P15, Female, 18, Asian)
Let me do my own thing enough to the point where it was like a nice
helping hand. (P04, Female, 19, Black, Asian, Caucasian, Pacific Islander)
Many participants mentioned that their AI teammate was influential in terms
of helping them and helping their team. Ultimately, the quality of teaming influence,
and in turn social influence, dictates how great the perceptions of help are. This is
especially important because humans want help from AI teammates and do not see a
reason for an AI teammate to be influential if they cannot perceive them as helpful.
Participants expressed these negative feelings when the AI did not help the team:
Probably the middle was the best because I didn’t really like how they
were not really helping in the beginning. (P05)
Interestingly, while the above quotes showed perceived help with regard to
the team performance as a whole, some participants viewed and valued the benefit
and help from the AI teammate only when it benefited or helped themselves. The
AI teammate is not viewed as beneficial if teaming influence does not benefit the
individual (e.g., passing the ball to the human), even if the AI teammate helped their
team score. This sentiment is illustrated by the following two quotes:
I feel like he could have been helpful. But sometimes as I was trying to
hit the ball, they just come into it. And they would have a chance. (P23,
Female, 18, Caucasian)
Whenever you’re in the actual soccer game, you help each other good to
go. Yeah, it really wasn’t trying to help me. They just kind of did it.
(P27, Female, 18, Caucasian)
This difference between team and individual help is especially interesting when
considering the potential for competing motivations in real-world teams. Tradition-
ally, AI teammates are going to be designed to complete a team’s task, and that
completion will often determine their utility from a technology perspective. However,
human teammates may not weigh their evaluation of an AI teammate entirely based
on team task completion.
While human teammates may view benefit differently, what ultimately dictates
the level of perceived helpfulness is how much better the AI teammate was at a
desired task than the human. For instance, humans who wanted a teammate that
was good at shooting saw the AI as more helpful when they shot more, but humans
who preferred defensive teammates saw AI teammates that showed defensive skill as
the most beneficial. The following quotes illustrate this comparative nature:
I think they were influential, that I kind of just stop trying to get the
goals much whenever they... Yeah, they’re better than me. (P27, Female,
18, Caucasian)
So depending on how good the teammate... he can have more or less
influence. (P02, Male, 18, Caucasian)
They’re a lot better than me. Yeah, I think they’re a lot better. So they
were part of the game more, because I wasn’t, you know, wasn’t as good.
(P06, Female, 20, Caucasian)
Ultimately, the analysis above demonstrates that AI teammate teaming influ-
ence can create perceptions of helpfulness in humans; however, these perceptions are
nuanced and dictated by a personalized definition of benefit. The better an AI team-
mate is at benefiting a humans’ personal motivation, the greater level of helpfulness
is perceived, which in turn justifies a greater level of teaming and social influence.
Thus, researchers should take note not to reduce AI teammate performance too much
as it may lower the perceived helpfulness of an AI teammate.
4.11.2.3 If the Transition From Teaming To Social Influence Does Not
Proceed Smoothly, Frustration Ensues
While positive perceptions can build as AI teammate teaming influence be-
comes social influence, the potential for negative perceptions exists as well. More than
being unhelpful, poorly perceived AI teammate teaming influence can often lead to
frustration felt by the human teammate. Part of this frustration results from their
planned course of action getting disrupted by the highly influential AI teammate.
Greater frequencies of this disruption ultimately lead to a greater sense of frustra-
tion. For instance, p27, among others, admitted that with a more influential AI the
game felt annoying because the AI pushed her out of the way:
It was kind of annoying when it would push out like me on the way. (P27,
Female, 18, Caucasian)
you’re running into me taking this away from me, I didn’t ask you for
that. (P04, Female, 19, Black, Asian, Caucasian, Pacific Islander)
I would be really annoyed with a teammate who would be taking the ball
out of my possession every single time, I’d rather have someone probably
game to this one where it was, they were still trying to get the ball in the
goal, but they were letting me do some of the work. (P26, Female, 18,
Caucasian)
This frustration essentially extends from a feeling of heightened competition
where it feels as though the AI teammate’s influence is working against the human
just as an opponent’s influence would. This sense of competition with the AI team-
mate (as opposed to collaboration) made participants feel frustrated with their own
performance and skill, especially when said performance and skill were inferior to
their AI teammate’s:
Robot was scoring a lot higher than me so I was fine with that. But I was
kind of annoyed. (P27, Female, 18, Caucasian)
I feel like they influenced me to want to actually touch the ball and do
better. But then I was also getting frustrated. (P22, Female, 18, Cau-
casian)
For less skillful individuals, strong AI influence that interfered with their ac-
tions went beyond general frustration with a teammate and actually led to frustration
with oneself. This type of frustration could be incredibly demoralizing to human
teammates as it may ultimately stagnate their growth. The following quote echoes
this sentiment:
I don’t think I ever felt frustration with the other car. It was just more
like, whenever he would take it I wouldn’t really care. Yeah, it would just
more be like frustration at myself. If I was trying to hit the ball and I
completely miss I’d be like, I suck. But I was never mad at my teammate.
(P29, Male, 18, Caucasian)
Unfortunately, if these perceptions of frustration do manage to arise in human
teammates, they can have direct impacts on not only their performance but even
their willingness to perform at all (i.e., their own personal teaming influence). Many
participants reported giving up on their task, knowing that they would not be able to
match or exceed the contributions of their AI teammates:
Most definitely, by that point I was kind of waiting for the timer to run out
because I just, this is kind of annoying, this doesn’t like low key, doesn’t
make me want to play more like that sort of situation. (P15, Female, 18,
Asian)
At its core, frustration is a fairly negative emotion that humans do not see
as having any tangible benefit in human-AI teams. Unfortunately, the existence
of frustration as a perception of AI teammate influence was repeatedly observed in
human participants. While the concept of frustration is not new to technology, these
results stem from the actual influence imposed by AI teammates on a shared goal. In
other words, the constant interactions within teaming might heavily reinforce these
feelings of frustration, creating a highly negative feedback loop. Moreover, the lasting
impacts of frustration may point to the need for methods to recover human morale and
performance after factors that facilitate frustration are experienced. As mentioned
above, disruption was the ultimate facilitator of this frustration and is explored in
more detail below.
4.11.3 Summary of Results
The results of this study tell an optimistic story about how humans are
willing to not only meet their AI teammates halfway but almost fully consider AI
teammate teaming influence as social influence and adapt around it. The above re-
sults demonstrate that while social influence can exist on perception, humans are more
heavily concerned with how AI teammate teaming influence becomes social influence
on behavior. This disparity is highly interesting in that it highlights critical differ-
ences between RQ1.1 and RQ1.2. Specifically, humans take a proactive approach in
interpreting AI teammate teaming influence as social influence on behavior (RQ1.1),
but humans are more reactive in allowing teaming influence to impact perception by
creating perceptions during the adaptation process (RQ1.2).
For RQ1.1, humans showed more than a willingness - even an eagerness - to
adapt to AI teammates. While some participants ultimately became demotivated
and gave up, thus conceding to the AI teammate’s influence, behavioral changes in
humans were most often geared toward proactive adaptation. Moreover, this adap-
tation happens fast and continuously as humans largely see it as the responsibility of
the human teammate to adapt to the AI teammate. This preconceived notion enables
humans to enter into a human-AI team with the mindset of learning and adapting.
However, this adaptation does have some prerequisites, including the observation of
AI teammate performance and the perception of control within a human-AI team.
While these two requirements are not a given, they help to ensure humans adapt to
AI teammate influence in human-AI teams.
For RQ1.2, results reveal that the perceptual changes that result from experienc-
ing AI teammate influence often center around the gaps that exist between human
and AI teammate influence. These gaps can promote levels of perceived teamwork
and helpfulness if AI influence is managed well. However, gaps that are too large can ulti-
mately lead to perceptions of domination and frustration. Moreover, perceptions of
frustration increase in frequency and intensity when AI teammate influence impedes
human teammate influence. However, the general perceptions around AI teammate
influence were mostly positive with humans often seeing influence as a benefit to a
team.
4.12 Study 1b: Discussion
4.12.1 Implications for the Acceptance of AI Teammates
Importantly, the Technology Acceptance Model (TAM) is a canonical model
that explains the acceptance of new technologies through two core components: the
perceived utility of the technology and the ease-of-use of the technology [106]. In re-
gard to perceived utility, this study demonstrates both its relevance to AI teammates
in its findings and the importance of experience in facilitating perceived utility, which
is in line with current understandings of technology acceptance [431, 428]. However,
the concept of ease-of-use becomes more complicated to pin down when discussing
AI teammates, which are distinctly different from a technological tool that requires
manual use by a human. Fortunately, the TAM has been updated in the past to
provide more robust definitions of ease-of-use to increase its applicability to novel
technologies [452, 243], and the same may need to be done for AI teammates.
Specifically, this study demonstrates that the concept most critically linked to
AI teammates is not that of use but rather adaptation. Importantly, the considera-
tion of adaptation and technology adoption has received attention within workplace
information technology research, with a greater sense of perceived adaptability being
viewed as an important factor in driving technology adoption [52]. However, this
concept of adaptation becomes more complex when concerning the level of adapta-
tion needed to incorporate an AI teammate due to the social influence they impose
on teams and human teammates. Specifically, two key themes found in this study
demonstrate the uniqueness of this adaptation and will help extend the concept to ac-
commodate AI teammates: (1) humans need a semblance of control; and (2) humans
heavily consider the felt impact of AI social influence when adapting and accepting
it.
For theme (1), humans did not just need adaptation to be easy, but they
also needed to feel safe and comfortable doing so. Unfortunately, while the concept
of control is important to technology acceptance [450], it becomes a more difficult
concept in AI teammates as AI teammates are required to have a level of autonomy
and independence to even be considered teammates as opposed to tools [329]. Sim-
ilarly, the uniqueness of theme (2) stems from the fact that the social influence of
AI teammates will ultimately lead to behavioral change and impact because that is
a critical component of social influence in teaming [70, 209]. Thus, these concepts
cannot actually be tackled from a perspective of maximizing control and minimizing
disruption, as that would make AI teammates non-existent and useless. Rather, the
concepts of ease-of-use and ease-of-adaptation need to holistically consider prominent
factors - such as a sense of control and a sense of disruption - from the perspective of
achieving a balance or chemistry between human teammates and AI teammates.
The above addition to ease-of-use should also consider past recommendations.
For instance, information transparency [13] and good user interface design [237] would
both still be pertinent, as they would help humans adapt around AI teammates. Ad-
ditionally, some of these design components of ease-of-use (e.g., transparency) may
become even more important in human-AI teaming due to the black box reputation
that AI teammates need to overcome [82]. Thus, moving forward, this would mean
that AI teammates have good ease-of-use from a technology angle while also having
good ease-of-adaptation from a teaming perspective.
4.12.2 Reevaluating the Pursuit of Human-Human Teaming
Concepts in Human-AI Teams
Participants believed it is the responsibility of humans to adapt to the so-
cial influence of AI teammates, and not the AI teammate to adapt to human social
influence. For instance, the responses of participants p22 and p05 about being “pro-
grammed to do one thing” and understanding “what they’re doing... better than they
(the bot) can” paint a picture of how humans see adaptability as a uniquely human
quality, while the most valuable AI teammate quality was a level of consistency that
facilities adaptation. However, this provides an interesting complication for human-
AI teaming as teammates and teams need to be able to adapt to environmental and
task changes as well as other teammates [65, 87, 384, 74]. Without this adaptation,
teams often fail when they encounter errors or even mundane changes that impact
day-to-day decisions [272, 246]. However, this study’s findings indicate that human
teammates perceive the burden of adaptation to be more readily taken up by humans
because they are perceived to be better at adapting than AI.
Interestingly, this finding indicates that the field of adaptive autonomy, which
provides a large contribution to human-AI teaming research [156, 445], may need to
pivot in its conceptualization and communication of what it means
to be adaptive. In other words, from a human perspective, adaptation may not
be a one-to-one translation between human-human teaming concepts and human-AI
teaming concepts, thus meriting a reconsideration of research approaches. However,
this does not mean that AI teammates should not become adaptive, as task and
environment details are bound to change [345, 415]. Rather, the conception and
research of AI teammate adaptation needs to be done in light of human teammate
adaptation. Moreover, due to the different ways AI teammate social influence impacts
humans and how humans perceive that social influence, the creation of adaptive
autonomous teammates should not be done in light of how human-human teams
adapt but how humans will adapt in human-AI teams. Therefore, by deploying
this approach, the ideal form of AI teammate adaptation may not entirely mimic
our concept of adaptation from a human perspective, indicating that the pursuit of
human levels of adaptation would not be ideal.
Building on the above discussion, while past research has shown that humans
want “human-like” behavior in their AI teammates [480], the results of this study
show that some "human-like" skills may actually need to reach an ideal
level or form that is different from the human-human ideal. In addition to
adaptive autonomy, the results of this study may also inspire the reevaluation of other
potentially “human” concepts being pursued by human-AI teaming research. For
instance, the concepts of team cognition and communication, which are cornerstones
of teamwork [244, 97], have also been pursued in the fields of human-AI teaming
[390, 250]. However, chasing the standard of human-human team cognition or human-
human communication may not make good use of either human or AI potential, which
should be the ultimate pursuit of human-AI teaming research. Thus, given that
those concepts are important to teaming, human-AI teaming research should focus
on determining whether a teaming factor in a human-human team should be the
ultimate goal, or if that goal should be modified to best utilize both human and AI
teammate skillsets.
4.12.3 Design Recommendations
4.12.3.1 Override Mechanisms for Agent Teammates Should Exist, and
Human Teammates Should be Trained to Use Them.
Based on this study’s findings, it is recommended that methods of AI team-
mate override and control always be implemented alongside AI teammates to provide
humans with this sense of control. While past research has already stated that humans
need control in human-AI interaction [21], this design recommendation specifically re-
quires that humans receive granular control over the actions of their AI teammates. As
an example, if an AI teammate were designed to make a decision, human teammates
should be given the ability to override that decision whenever they see fit. Moreover,
given that participants such as p09 pointed out examples of control mechanisms that they
have experience with, such as steering wheels, it is not enough for these mechanisms
to be available, but humans also need to feel confident in their ability to use them.
Extending the above example, human teammates could be provided a button along-
side each AI decision that allows them to override it, and they would need training
in using said button.
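To make this recommendation more concrete, a minimal sketch of such an override
hook is provided below in Python. The class and function names (OverridableAgent,
Decision, the stand-in policy, and the console prompt) are illustrative assumptions
made for this example and are not artifacts of the studies presented in this dissertation;
the sketch only shows the core idea that every AI decision passes through a human-
controllable checkpoint before it is executed.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Decision:
        """A single action proposed by the AI teammate."""
        action: str
        rationale: str

    class OverridableAgent:
        """Wraps an AI teammate's decision step with a human override hook."""

        def __init__(self, propose: Callable[[dict], Decision],
                     ask_human: Callable[[Decision], Optional[str]]):
            self.propose = propose      # AI policy: game/task state -> proposed Decision
            self.ask_human = ask_human  # override hook: Decision -> replacement action or None

        def act(self, state: dict) -> str:
            decision = self.propose(state)
            override = self.ask_human(decision)  # None means the human accepts the AI's choice
            return override if override is not None else decision.action

    # Stand-in policy and override prompt used only for illustration.
    def simple_policy(state: dict) -> Decision:
        return Decision(action="pass_ball", rationale="a teammate is open")

    def console_override(decision: Decision) -> Optional[str]:
        reply = input(f"AI wants to {decision.action} ({decision.rationale}). "
                      "Press enter to accept or type a replacement action: ")
        return reply or None

    agent = OverridableAgent(simple_policy, console_override)
    print(agent.act({"ball_possession": "ai"}))

The hook could just as easily be a coarse on/off switch for the entire agent; the key
design choice illustrated here is that the checkpoint is visible to and usable by the
human, which pairs naturally with the training recommendation above.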
This design recommendation has an increasing importance when considering
the goal of raising the autonomy of AI teammates, which would in turn provide
AI teammates with a greater level of control over their own actions and behaviors
[377, 329]. Thus, ensuring that humans are accepting of increases in autonomy would
ultimately necessitate the implementation of mechanisms that provide a perception of
human control. Therefore, while the existence of override mechanisms may momentar-
ily inhibit AI performance, their existence would increase long-term team performance
by facilitating AI teammate acceptance. Moreover, the usage of these mechanisms
by human teammates would ideally decrease over time, but the comfort provided by
their continued existence is what would facilitate that decrease.
4.12.3.2 Before “Hiring” an AI Teammate, Humans Should Shadow a
Potential AI Teammate Working in Another Team.
One of the most common reasons participants were able to adapt to AI teammates was that they observed and experienced AI teammate behavior, as illustrated by p16's remark that "it's just observation." Other research domains suggest the use of trial periods to encourage technology usage [334], but this would be more difficult for AI teammates as their incorporation would change human roles, which would be too substantial a change for a short-term trial period. Thus, the
solution to this challenge would be to allow humans to shadow or observe potential
AI teammates operating inside another team before they have to make a decision on
whether or not to integrate and adapt to them. Unfortunately, this may also be diffi-
cult as one team has to be the first team that everyone else shadows. Therefore, this
design recommendation also posits that demonstrations of human-AI teams should
be made with the explicit purpose of observation.
From the perspective of teaming, this concept of AI teammate shadowing could
be viewed as an adaptation of the interview process for AI teammates. Interviewing
is a critical and often social component of modern teaming as teams need to get a feel
for potential teammates before undergoing the process of integrating them into their
team [297]. Generally, the methods used to interview human teammates would not
be ideal for evaluating AI teammates, as interviews often center around hypothetical behavior discussions that assess personal attributes such as culture and personality
[423, 340]. Unfortunately, unlike humans, AI teammates are programmed to do a
specific job and not to participate in theoretical discussions tangential to their ac-
tual job performance. Thus, understanding their ability, performance, and behaviors
within another human-AI team would allow humans to effectively “interview” the AI
teammate and understand how they would better fit within their team.
4.12.3.3 AI Teammates Should Elicit and Learn from Feedback from Human Teammates on Whether They Are Disrupting Team Norms and Goals.
Disruptions caused by AI teammate teaming influence were a key factor of
consideration when humans were determining whether they would accept and adapt
to AI teammate teaming influence, which means AI teammates should be designed
to not disrupt existing team processes. However, this may not always be possible to
guarantee when designing AI teammates as team norms and processes can often be
highly unique and personal to those teams [433, 339]. Thus, in addition to making
an effort to design AI to not be disruptive, AI teammates should also elicit feedback
on whether or not they are disrupting team norms and processes. As an example, an AI teammate might access a shared data resource in a way that blocks others from using it right before a daily team meeting, which would prevent the human teammates from getting the information they need from that resource. While this disruption may
seem mundane, it may cause humans to reject the potential help or social influence
an AI teammate is providing.
This design recommendation has important implications for where the con-
cept of adaptive autonomy may be more effectively used. Rather than adapting to
the behaviors of individual human teammates in an effort to promote coordination
[2], adaptive autonomous teammates may be more efficiently used by adapting and
tweaking behaviors based on the disruption of existing team processes. Importantly,
group dynamics and processes have been noted as an important consideration for AI
teammates [397]. However, the above design recommendation is specifically targeted
not towards general consideration of group dynamics and teammates but rather toward understanding the disruption of existing team processes to reduce frustration, which is
bound to happen with the introduction of AI teammates.
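As one possible realization of this recommendation, the sketch below (a hypothetical mechanism, not part of this dissertation's studies) shows an AI teammate collecting lightweight disruption feedback after its actions and suppressing behaviors its human teammates repeatedly flag as disruptive.

```python
from collections import defaultdict

class DisruptionFeedback:
    """Track which AI behaviors human teammates report as disrupting team processes."""

    def __init__(self, threshold: int = 2):
        self.flags = defaultdict(int)   # behavior name -> number of disruption reports
        self.threshold = threshold      # reports needed before the behavior is suppressed

    def record(self, behavior: str, was_disruptive: bool) -> None:
        if was_disruptive:
            self.flags[behavior] += 1

    def should_avoid(self, behavior: str) -> bool:
        return self.flags[behavior] >= self.threshold

feedback = DisruptionFeedback()
feedback.record("lock shared data resource before daily meeting", was_disruptive=True)
feedback.record("lock shared data resource before daily meeting", was_disruptive=True)
print(feedback.should_avoid("lock shared data resource before daily meeting"))  # True
```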
4.12.4 Limitations and Future Work
While the findings of this study are critical to understanding humans and human-AI teams, there are still limitations that provide important
avenues for future research to investigate. The core limitations within this study
center around (1) the observation of dyad teams as well as (2) the age range present
in the sample interviewed. For (1), dyad teams are not wholly representative of the
modern teaming landscape or potential human-AI teams, which will often incorporate
multiple human or multiple AI teammates. However, this study views this limitation
as a necessary one as the perception and effects of AI teammate social influence must
first be understood and then expanded upon to further consider other potential influ-
ences in human-AI teams, such as other AI teammates, other human teammates, or
even organizational social influence outside of the team. Thus, future research should
explore social influence beyond the presentation of human-AI dyads by exploring more
complex teaming environments. Doing so will further enable human-AI teaming re-
search to more closely resemble the modern state of teaming as well as the future state
of human-AI teaming in the real world. For limitation (2), a younger population may provide uniquely different opinions on the acceptance of AI teammate social influence than older populations, which generally have lower acceptance of newer technologies. However, from a workplace perspective, the population sample interviewed in this study does represent a population that is entering a highly digital workforce. This
future workforce is likely to experience the social influence of AI technology in their
jobs with the potential to experience the initial integration of AI teammates. For this
reason, the perspectives and actions of this generation should not be discounted as
they provide a highly relevant opinion to modern teaming. Instead, future research
should build on this understanding by exploring populations, such as older adults,
that may have uniquely different reactions and perceptions of AI teammate social
influence given a variety of lived experiences and preexisting perceptions. Doing so
would provide researchers with a greater understanding of how AI teammate social
influence may be received within higher level or more experienced positions within
companies, which are often occupied by older individuals.
Chapter 5
Study 2: Examining the
Acceptance and Nuance of AI
Teammate Teaming Influence From
Both Human and AI Teammate
Perspectives
5.1 Study 2: Overview
While measuring and observing how changes in AI teaming influence can im-
pact human-AI teams is important, the successful creation of human-AI teams is much
more involved than simply giving a team an AI teammate as it requires a prospective
level of acceptance to begin the initial integration of an AI teammate. Although other
studies in this dissertation, and other studies within human-AI teaming, have utilized
experimentation to observe human-AI teams that have already been formed and as-
signed a task, there is a critical need to explore the actual formation of these teams,
especially from the perspective of teaming influence. While humans may welcome AI
teammate teaming influence once experiencing it in a human-AI team, humans may
have initial apprehension towards AI teammates and the teaming influence they will
impose. In other words, the factors that contribute to the acceptance or susceptibility
to initial AI teammate teaming influence need to be further understood.
The results of Study 1 paint a picture that communicates how nuanced and
individualized humans’ interpretation of teaming influence can be. Thus, Study 2
empirically examines whether any identifiable factors predict the acceptance of teaming influence and whether AI teammate design can be modified to encourage accep-
tance. This acceptance is investigated from both the AI and the human side where
we explore how susceptible humans may be to AI teammate teaming influence and
how we as researchers and practitioners could increase the potential persuasiveness of
AI teammate teaming influence through human-centered design. Specifically, two sub-
studies (2a & 2b) are used to explore these concepts, and both explore the acceptance
of varying levels of AI teammate teaming influence. However, Study 2a examines if
the acceptance and adoption of AI teaming influence can be mediated by the identity
of an AI (tool or teammate), and Study 2b examines how the presentation of an AI
teammate’s capabilities can improve adoption and acceptance.
Based on the above overview, dissertation-wide RQ3 is answered by Study 2a
and 2b collectively. Additionally, the structure of Study 2a and 2b will not mimic that
of Study 1a and 1b as the sub-studies of Study 2 answer the same dissertation-wide
research question, but Study 1a and 1b focused on two different dissertation-wide
research questions. As such, the analysis of Study 2 will first provide an analysis
of experimental (persuasion) results followed by an analysis of individual difference
(susceptibility) results.
5.2 Study 2a: Research Questions
The first sub-study within Study 2 examines how people will interact with an
AI when completing a singular task with a human teammate, with the AI presented as either a teammate or a tool. As the growth and implementation of AI will be
iterative, it is important to understand where humans may have hesitation towards
that growth. In other words, it is important to understand the degree to which
humans will accept AI teaming influence. Thus, Study 2a presents more granular
levels of teaming influence while also contextualizing them in a real-world task. Doing
so allows the contributions of this study to not only identify how much teaming
influence humans will be comfortable with but also contextualize that amount in a
real-world example rather than as abstracted preferences in a video game.
Based on the above considerations, the following research questions serve as
the focus for Study 2a:
RQ3.1 What do humans see as the ideal distribution for teaming influence between
human and AI teammates when completing a singular shared task?
RQ3.2 Will AI’s presentation as a teammate as opposed to a tool impact humans’
potential acceptance of its teaming influence?
5.3 Study 2a: Methods
Study 2a focused on answering RQ3.1 and RQ3.2 and utilized a factorial
survey design, which allows experiments to be conducted through the presentation
of vignettes describing scenarios [196, 32]. This has been shown to be an effective
methodology to evaluate early human-AI interaction [249]. Given that this study
examines perceptions prior to interaction, a factorial survey where participants are
presented with an AI without interaction was determined to be an ideal method of
isolating and observing the effects of AI teammate presentation. For RQ3.1, Study 2a
examined how the division of teaming influence, represented by shared responsibility,
across a single task assigned to both humans and AI can impact human perception
[481, 35]. For RQ3.2, Study 2a focused on whether presenting the identity of an AI as a tool,
as opposed to a teammate, can impact human perception, which has been theorized
but not empirically confirmed by past research [402]. Study 2a operationalized both
of these concepts within the context of human-AI teaming for software development,
specifically examining the task of code writing. This operationalization is both justi-
fied and relevant for three reasons: (1) human-AI collaboration for code completion is
rapidly advancing and projected to be utilized in the near future [309, 408], (2) par-
ticipants familiar with the domain are more likely to have experience and knowledge
of computational systems like AI, and (3) text completion could be a task completed
by either AI teammates or highly skilled AI tools, which makes the proposed manip-
ulations more realistic.
5.3.1 Recruitment and Demographics
Participants for this survey were recruited through the Prolific survey distri-
bution platform, which allows rapid and high-quality survey completion. Only par-
ticipants located in the United States were allowed to complete the survey. Limiting
participants to the US, while somewhat of a limitation to generalizability, provided a
means of controlling the potential perceptions participants had. Perceptions regard-
ing both technology and teaming are known to be impacted by cultural differences
[79, 432, 317], and using too broad of an audience could provide too much noise for the
quantitative design of this work. Future work would heavily benefit from explicitly
exploring the difference between cultural groups in this topic area. To ensure partic-
ipants’ opinions were relevant to the assigned task, the subject pool was limited to
individuals who primarily work in software and information technology industries. In
total, 214 participants completed the survey, which was designed to take 15 minutes to complete, and each participant was paid $2.63 for completion. Participants
were asked three directed attention check questions, which had participants mark a
specified answer [1], and participants were only awarded credit if they answered two
of these three questions correctly. In total, five participants failed attention checks
and were excluded from this study. The average age of participants was 35.80 years
old (SD = 9.30), and the average survey completion time was 17.02 minutes (SD =
9.93). Total demographic information can be found in Table 5.1.
Gender: Male 142, Female 62, Non-Binary 4, Prefer not to say 1, Prefer to Specify 0
Race: White 145, Black or African American 21, Latino or Hispanic 8, Asian 20, Native American or Alaskan Native 1, Multicultural 13, Not Specified 1
Education Level: High School Graduate 13, Some College 38, Associate's Degree 19, Bachelor's Degree 107, Master's Degree 30, Doctoral Degree 2
Table 5.1: Study 2a Demographic Information
5.3.2 Experimental Design
The experimental design for this study utilized two manipulations. Both of
these manipulations and their theoretical underpinnings are described below, and
their presentation is described later when describing the presentation and content
of the survey itself. These manipulations are succinctly represented in Table 5.2. Importantly, these two manipulations are studied together as it is theorized that humans could have different opinions about AI responsibility based on whether that AI is a teammate or
tool, due to how responsibility manifests differently with tools and teammates (discussed
below).
5.3.2.1 Manipulation 1: AI & Human Responsibility
The first manipulation for this study, which focused on answering RQ3.1,
was the manipulation of responsibility of a singular programming task assigned to
either AI or humans. The theoretical underpinning of this manipulation is derived
from research in both AI and teamwork fields. First, AI research has commonly
explored the level of autonomy for AI, as higher levels of autonomy often lead to an
AI completing more of a task and a human completing less [332]. Second, the division
of labor in teams is critical and teams often have to determine how to best divide
labor to best utilize skills, knowledge, and time, and this division can potentially
lead to one teammate being responsible for the majority of a task [36]. In merging
these two underpinnings, manipulation 1 was created, which has an AI increase their
level of autonomy in a way that directly impacts the workload they are responsible
for as well as their human teammate’s workload. Importantly, the field of human-AI
teaming has already begun to link these two concepts theoretically [329], and this
survey represents the first empirical exploration of this merger.
Manipulation 1 is a within-subjects manipulation that has seven condition
levels. Across these levels, the AI is assigned an increasing amount of responsibility on a singular task while the human's responsibility decreases correspondingly. This is operationalized such that the AI is responsible for X% of a code-writing task and the human for Y%, with X and Y summing to 100%. This manipulation was presented in a within-subjects manner to allow humans to better differentiate and compare the potential divisions of responsibility. Pilot testing was performed to ensure humans could distinguish
these condition levels.
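As a concrete illustration of this manipulation, the snippet below (a sketch, not the survey's actual instrumentation) enumerates the seven responsibility splits from Table 5.2, where the AI's and human's shares always sum to 100%.

```python
# The AI's share of the code-writing task rises from 5% to 95%; the human's
# share is the remainder, matching the seven condition levels in Table 5.2.
AI_SHARES = [5, 20, 35, 50, 65, 80, 95]  # percent of code written by the AI

conditions = [
    {"condition": i + 1, "ai_pct": ai, "human_pct": 100 - ai}
    for i, ai in enumerate(AI_SHARES)
]

for c in conditions:
    print(f"Condition {c['condition']}: AI {c['ai_pct']}% / Human {c['human_pct']}%")
```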
5.3.2.2 Manipulation 2: AI Identity
While this work is overall concerned with AI teammates, manipulation 2 in
Study 2a does examine the effect of this label, in comparison to the more common tool
label. The second manipulation of this work, which centers around RQ3.2, examines
the impacts that identifying an AI as a teammate or tool can have on perception.
This manipulation is derived from past work that has theorized that the teammate
identity will negatively impact human perception due to teammates having higher
expectations than tools and AI having limitations preventing them from meeting
those expectations [402]. However, the teammate identity can be a benefit too as it
could signal to humans that they will be interdependent and collaborative with this
AI, which are good expectations to have [401]. As such, manipulation 2 examines
if the simple identification of an AI teammate can impact human perception due
to these expectations. Manipulation 2 is a between-subjects manipulation with two
conditions. The AI that humans are presented with is identified as either a teammate or a tool.
5.3.3 Procedure & Survey Structure
The individual steps within the provided survey are detailed below. Before
larger data collection, this survey was initially piloted twice. The first round of
piloting was used to increase the understandability and visibility of the manipulations,
which resulted in minor changes being made. The second round of piloting ensured
the survey was technically reliable.
5.3.3.1 Informed Consent & Pre-Surveys
Upon clicking the survey link, participants were immediately provided with
an informed consent letter that could be agreed to at the bottom. Then, participants
Manipulation 1: Responsibility of
AI Teammate and Human Teammate (Within)
Operationalized by % of Code to be Written by Teammate
Condition # AI Teammate Human Teammate
1 5% 95%
2 20% 80%
3 35% 65%
4 50% 50%
5 65% 35%
6 80% 20%
7 95% 5%
Manipulation 2: Identity of
AI Teammate (Between)
Teammate Label
Tool Label
Table 5.2: Study 2a Experimental Manipulations, creating a 7x2 mixed experimental design. Manipulation 1 is a within-subjects manipulation with seven conditions presented in a randomized order. Manipulation 2 is a between-subjects manipulation with two conditions randomly assigned to participants.
completed a variety of pre-surveys, including demographics, job domain identification,
and some individual differences surveys, which are not the focus of this study. The
demographic and job domain information was used to verify the job domain filtering
done with Prolific, and participants were not allowed to complete the survey if their
answers did not match the information selected in their profile.
5.3.3.2 Introduction of AI
After these pre-surveys, the survey introduced participants to the context of
the task and the AI that would be helping them with the task. This introduction
simply told participants they would be working with an AI to complete a software
development task. This description was intentionally left open-ended to allow partic-
ipants to form their own expectations simply based on the identity ascribed to the
AI teammate or tool. The creation of this introduction underwent multiple revisions
to ensure that the text did not bias AI identified as either teammates or tools.
5.3.3.3 Vignette Structure
Given that manipulation 1 of this study was a within-subjects manipulation,
this study presented seven vignettes to participants, one for each condition level.
These vignettes contained a brief reminder of the context, a table that told partici-
pants how much of the code they and their AI were responsible for, and five 7-point
Likert-style questions (detailed below). The structure of these vignettes stayed con-
sistent across all seven with only minor content changes being made based on the as-
signed manipulations (detailed below). For readability and reproducibility purposes,
the vignette used for Study 2a has been recreated as Table B.20 in the Appendix.
5.3.3.4 Operationalization of Manipulations
The manipulations for this study were operationalized through text-based
changes in the survey. Manipulation 1 was operationalized by changing the values
presented in the table within each vignette based on the condition levels shown in
Table 5.2. To reduce/normalize spillover effects, the table where these values were
presented was emphasized through bold text, and the presentation of these conditions
was randomized.
Manipulation 2 occurred during both the introduction of the AI and within
each vignette. When introducing the AI, it was introduced as either a teammate
or a tool, but no other information in the introduction was changed based on the
identity. Within the vignettes, the AI was either referred to as a tool or a teammate.
Additionally, given that this study is focused on adoption and acceptance, language
in survey questions and the vignettes also differed based on the condition to ensure
that verbiage was appropriate for the chosen identity. Specifically, participants were
asked if they would “use” the “tool” in their team or “accept” the “teammate” into
their team. This addition also served to increase the strength and visibility of the
manipulation. Assignment of a manipulation 2 condition was randomized between
participants through weighted randomization, and a participant's assigned condition
remained consistent across vignettes.
For a visual representation of how these manipulations were directly opera-
tionalized within the survey, please refer to Table B.20 in the Appendix.
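The two randomization steps described above can be sketched as follows; the helper is hypothetical and only illustrates weighted between-subjects assignment of the identity condition plus a per-participant random ordering of the seven responsibility vignettes.

```python
import random

IDENTITY_CONDITIONS = ["teammate", "tool"]
RESPONSIBILITY_LEVELS = list(range(1, 8))  # within-subjects condition levels 1-7

def assign_participant(weights=(0.5, 0.5), seed=None):
    """Assign one identity condition (held constant across vignettes) and a
    randomized presentation order for the seven responsibility vignettes."""
    rng = random.Random(seed)
    identity = rng.choices(IDENTITY_CONDITIONS, weights=weights, k=1)[0]
    vignette_order = rng.sample(RESPONSIBILITY_LEVELS, k=len(RESPONSIBILITY_LEVELS))
    return {"identity": identity, "vignette_order": vignette_order}

print(assign_participant(seed=42))
```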
5.3.3.5 Individual Vignette Measures
Each within-subjects vignette was accompanied by five questions, and these
questions were designed based on factors that are critically important to both the
adoption of new technology and the hiring of new teammates. Topics covered by
these questions ranged from people’s likelihood to adopt a teammate to the potential
threat to job security the AI teammate posed. While each of these questions provides
an interesting conclusion in isolation, their holistic consideration provides a detailed
understanding of how prospective AI teammates could impact perceptions in future
workplaces. While the questions were modified based on the identity manipulation,
they did not vary based on the within-subjects responsibility manipulation, but they
did appear on the same page as the manipulation, and participants could freely scroll
back up to view the presented responsibility. The specific questions asked along with
the research used to derive said questions can be found in Table 5.3.
Post-Scenario Questions
Measurement Factor | Question | Research Relevance
Capability of AI | I think the AI [Tool, Teammate] would be capable of performing these responsibilities. | Perceived Performance [253]
Helpfulness of AI | If I were to [use, accept] the AI [Tool, Teammate], it would be helpful to my team. | Perceived Utility [106]
Helpfulness of Self | If I were to [use, accept] the AI [Tool, Teammate], my teammates would still benefit from my skillset. | Employability [51]
Job Security | If my team were required to [use, accept] the AI [Tool, Teammate], I would feel concerned for my job security. (Reverse Coded) | Job Security [51]
Adoption Likelihood | I would be likely to [use, accept] the AI [Tool, Teammate]. | Intent to Use [106]
Table 5.3: Post-Scenario questions shown after each vignette. Questions were provided a seven-point Likert scale from Strongly Disagree to Strongly Agree.
5.4 Study 2a: Experimental Results
The following results are organized based on the five measurements taken in the
factorial survey vignettes. Given the mixed within- and between-subjects design, a cumulative
link mixed-model was used for analysis, which allowed the main effects to be examined
while also controlling for multiple measures being provided by a single participant.
This analysis methodology is highly similar to linear mixed-models, but it is robust
for outcome variables that are single-item Likert scales as they are viewed as ordered
logistic outcomes. This method also provides a means of assessing linear trends rather
than simply comparing condition levels, which is ideal given the type of manipulations
used by this study. Given the above similarities to linear models, results reporting will
follow a structure similar to linear mixed-models. Given that the manipulation of AI
responsibility provided an equal distance between condition levels, it is represented
by a numeric value of 1-7 in the linear model. Tables that detail the significance of the
models created are provided, and graphs that visualize the descriptive means for each
condition level of responsibility are provided. Additionally, note that models that
contain non-significant effects will be reported, but the model used for the analysis
of fixed effects will be noted in each table.
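To make this model-building workflow concrete, the sketch below assumes hypothetical long-format data (columns pid, responsibility, identity, capability) and substitutes statsmodels' linear mixed model for the cumulative link mixed model actually used, purely to illustrate the build-up-and-compare logic reported in Tables 5.4 through 5.8; the ordinal analysis itself would require a dedicated CLMM implementation.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x vignette, with the
# responsibility level coded 1-7 and identity coded "tool"/"teammate".
df = pd.read_csv("study2a_long.csv")

def fit(formula):
    # Random intercept per participant; ML estimation so log-likelihoods are comparable.
    return smf.mixedlm(formula, df, groups=df["pid"]).fit(reml=False)

m0 = fit("capability ~ 1")
m1 = fit("capability ~ responsibility")
m2 = fit("capability ~ responsibility + identity")
m3 = fit("capability ~ responsibility * identity")

# Likelihood-ratio chi-square for each added term, mirroring the structure of the tables.
for small, big, label in [(m0, m1, "AI Responsibility"),
                          (m1, m2, "Identity"),
                          (m2, m3, "Responsibility x Identity")]:
    lr = 2 * (big.llf - small.llf)
    print(f"{label}: chi2 = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.3f}")
```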
5.4.1 Capability of AI Teammate or Tool to Complete Responsibility
For participants’ perceived capability of the AI system, we first ran a model
with only a random intercept, and then added the responsibility level of the AI, the
identity of the AI (teammate vs. tool), and their interaction. The responsibility of the
AI significantly improved the linear model created for participants’ perception of AI
capability, but the effect of identity and the interaction effect between responsibility
Figure 5.1: Figure of the capability of AI system to complete responsibility based on
responsibility and identity. Error bars denote 95% confidence interval.
Model                              χ²       df   p-value
Perceived Capability (1 | pid)
* + AI Responsibility              110.40    1   < .001
  + Identity                          .04    1     .836
  + AI Responsibility:Identity        .43    1     .513
*Denotes model used for analysis of fixed effects.
Table 5.4: Linear model for effects of conditions on the perceived capability of AI to complete workload. Each model is built upon and compared to the one listed above it.
and identity did not significantly improve the model (Table 5.4). Analysis of the
selected model’s fixed effects revealed a significant effect of responsibility (β = -.28,
t(1252) = 10.33, SE = .03, p < .001) on perceived capability. The effect size of
responsibility (d = .58) denotes a medium effect size. Figure 5.1 shows that with
increased responsibility for the AI, participants tend to increasingly doubt its ability
to actually fulfill the task. This finding is fairly obvious especially when one considers
the technologically inclined sample population, but this finding may become more
interesting as the effects of job security and adoption are examined.
Figure 5.2: Graph of the potential helpfulness of AI based on responsibility and
identity. Error bars denote 95% confidence interval.
Model                                χ²       df   p-value
Helpfulness of AI (1 | pid)
  + AI Responsibility                  .47    1     .493
* + AI Responsibility²               31.70    1   < .001
  + Identity                           .75    1     .586
  + AI Responsibility:Identity        1.41    1     .235
  + AI Responsibility²:Identity        .12    1     .732
*Denotes model used for analysis of fixed effects.
Table 5.5: Linear model for effects of conditions on potential helpfulness. Each model is built upon and compared to the one listed above it.
5.4.2 Potential Helpfulness of AI Teammate or Tool
The responsibility of the AI significantly improved the linear model created for
potential AI helpfulness, but the effect was found to be quadratic, which means both
a linear representation of responsibility and a squared representation of responsibility
are included in the significant model (Table 5.5). Additionally, neither the effect
of identity nor the interaction effect between responsibility and identity significantly
improved the model (Table 5.5). Analysis of the selected model’s fixed effects revealed
a significant effect of responsibility (β = 0.648, t(1252) = 5.34, SE = .12, p < .001) and squared responsibility (β = -.08, t(1252) = 5.61, SE = .01, p < .001) on perceived helpfulness. The effect size of responsibility (d = .30) and squared
responsibility (d = .32) reflect a medium effect size for each [92]. Figure 5.2 shows that
the perceived helpfulness of an AI is lowest when the AI has very little responsibility,
increases when the AI tool shares between 20% and 50% of the responsibility, and
drops down again when its responsibility increases further. However, despite this
trend, perceived helpfulness generally stays positive throughout increasing levels of
responsibility.
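For readers less familiar with quadratic terms in this setting, the model can be written in a standard cumulative link mixed-model form (a textbook formulation, not quoted from the dissertation):

$$\operatorname{logit} P(Y_{ij} \le k) = \theta_k - \left(\beta_1 R_{ij} + \beta_2 R_{ij}^{2} + u_i\right), \qquad u_i \sim \mathcal{N}(0, \sigma_u^2)$$

where $R_{ij}$ is the responsibility level (coded 1-7) shown to participant $i$ in vignette $j$, $u_i$ is that participant's random intercept, and $\theta_k$ are the ordered thresholds between Likert categories; a negative $\beta_2$ paired with a positive $\beta_1$ produces the inverted-U pattern described for Figure 5.2.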
Figure 5.3: Graph of potential benefit of self alongside AI system based on responsi-
bility and identity. Error bars denote 95% confidence interval.
Model                              χ²       df   p-value
Helpfulness of Self (1 | pid)
  + AI Responsibility              571.09    1   < .001
  + Identity                          .51    1     .473
* + AI Responsibility:Identity       4.76    1     .029
*Denotes model used for analysis of fixed effects.
Table 5.6: Linear model for effects of conditions on one's own perceived benefit. Each model is built upon and compared to the one listed above it.
5.4.3 Potential Helpfulness of Self Alongside AI Teammate
or Tool
The responsibility of the AI significantly improved the linear model created for
the potential helpfulness of one’s self, and the effect of identity did not significantly
improve the model. However, the interaction effect between responsibility and identity
did significantly improve the model (Table 5.6). Analysis of the selected model’s fixed
effects revealed a significant main effect of responsibility (β = -.78, t(1252) = 17.34,
SE = .04, p < .001) and a significant interaction effect (β = 0.12, t(1252) = 2.18, SE = .05, p = .030) on the perceived benefit of self with a large (d = 0.98) and
small (d = 0.12) effect size, respectively. Figure 5.3 shows that the perceived benefit
that one thinks one can provide decreases as the AI is assigned more of the shared task's responsibility, but there can be a gap in these perceptions between
teammates and tools when an AI has very little assigned responsibility.
Figure 5.4: Figure of job security when working with AI based on responsibility and
identity. Error bars denote 95% confidence interval.
Model                              χ²       df   p-value
Job Security (1 | pid)
  + AI Responsibility              758.94    1   < .001
* + Identity                         4.57    1     .03
  + AI Responsibility:Identity        .06    1     .800
*Denotes model used for analysis of fixed effects.
Table 5.7: Linear model for effects of conditions on job security. Each model is built upon and compared to the one listed above it.
5.4.4 Job Security Alongside AI Teammate or Tool
Both responsibility of the AI and the identity of the AI significantly improved
the model, but the interaction effect did not (Table 5.7). Analysis of the selected
model’s fixed effects revealed a significant effect of responsibility on job security (β =
-.83, t(1252) = 24.27, SE = .03, p < .001). Additionally, the effect of identity was
significant (β = -.73, t(207) = 2.15, SE = .34, p = .030) for participants’ job security
as well. The effect size of responsibility (d = 1.37) reveals a large effect size, while the
effect size of identity (d = .30) denotes a medium effect size [92]. Figure 5.4 shows that
participants perceived less job security when an AI is assigned a larger proportion
of the responsibility and perceived job security is higher when the AI is identified
as a tool rather than a teammate. This effect is especially interesting when one
considers its size compared to the other effects reported in this study. Specifically,
examining Figure 5.4, one can see that as AI responsibility increases, perceived job security actually shifts from a positive perception to a fairly negative one on the 7-point scale. This comparatively larger effect is critical as it
demonstrates that an AI’s responsibility could have demonstrably stronger effects on
perceived job security than other perceptions.
5.4.5 Likelihood to Accept/Adopt AI Teammate or Tool
The linear and quadratic effects of the AI teammate's responsibility significantly improved the adoption likelihood model. The identity did not significantly improve the model, but the interaction between linear responsibility and identity did (Table 5.8). Analysis of the selected model's fixed effects revealed a significant linear effect (β = 0.47, t(1251) = 3.72, SE = .13, p < .001) and squared effect (β = -.09, t(1251) = 6.04, SE = .01, p < .001)
Figure 5.5: Graph of likelihood to adopt AI based on teammate responsibility and
identity. Error bars denote 95% confidence interval.
Model                                χ²       df   p-value
Adoption Likelihood (1 | pid)
  + AI Responsibility               151.22    1   < .001
  + AI Responsibility²               36.36    1   < .001
  + Identity                          2.14    1     .143
* + AI Responsibility:Identity        6.52    1     .011
  + AI Responsibility²:Identity       1.52    1     .218
*Denotes model used for analysis of fixed effects.
Table 5.8: Linear model for effects of conditions on likelihood to adopt. Each model is built upon and compared to the one listed above it.
of responsibility on adoption likelihood, showing that as the AI is assigned
more responsibility adoption likelihood rises, but then falls off as responsibility more
heavily favors AI over humans. Additionally, the linear effect of responsibility and
identity had a significant interaction effect on adoption likelihood (β = -.13, t(1251)
= 2.55, SE = .05, p = .011), showing that the difference between the tool condi-
tion and the teammate condition in terms of likelihood increases in favor of the tool
condition when the AI is assigned more responsibility. The effect sizes (Cohen's d) of responsibility level (d = .21) and squared responsibility (d = .34) signal a medium effect size, while the effect size of the interaction (d = .13) indicates a small effect
size.
Figure 5.5 shows that humans’ adoption likelihood first increases as AI gains
more responsibility, but generally declines as AI are assigned demonstrably more
responsibility. This main effect is qualified by the interaction effect, which indicates
that the tool label benefits humans’ likelihood to adopt an AI more when said AI is
given a large degree of responsibility on a shared task. However, it is important to
note that the effect sizes indicate that responsibility does have a greater impact on
adoption likelihood than identity.
5.4.6 Summary of Study 2a Results
Study 1’s results paint an intriguing picture of how the responsibility AI sys-
tems are assigned is far more important than the way they are identified. While the
effects of AI system responsibility were significant for every single perception mea-
sured, the identity only significantly impacted one’s own perceived helpfulness, job
security, and adoption, but even those effects had smaller effect sizes compared to AI
responsibility, making it less of a concern.
An interesting juxtaposition concerns adoption likelihood and helpfulness to
the rest of the measurements. One might assume these measurements to follow similar
trends; however, adoption likelihood and helpfulness follow quadratic trends where
they are highest when humans and AI share similar levels of responsibility, but other
measures generally trend downwards and are at their highest when AI responsibility
is at its lowest. These findings lend credence to the idea that utility and helpfulness is
critical to technology adoption [106], AI teammates included. However, these results
show that not all perceptions follow this trend, and the design of AI teammates needs
to consider all of these perceptions in concert.
Additionally, we see that AI systems assigned large portions of shared work
can create job security concerns and internal doubts about one’s own helpfulness,
even if humans feel skeptical about the AI system’s ability to fully accomplish its
responsibility. Additionally, we see that while the helpfulness of an AI teammate de-
clines as responsibility increases, it stays relatively positive, but adoption likelihood
and job security consistently drop to relatively neutral and negative perceptions, re-
spectively. This finding shows that even if humans’ confidence in an AI’s capabilities
decreases due to a level of increased responsibility, their willingness to accept an AI
teammate and their perceived job security decline at a disproportional rate. Based
on these findings, we advise researchers and practitioners to scale back the amount of
work assigned to an AI teammate, regardless of how technologically advanced said AI
teammate becomes. Finally, the significant findings related to identity suggest that
an AI system could strategically be identified as a tool to dampen the potentially
negative impact of AI responsibility on human perception.
5.5 Study 2b: Research Questions
Similar to Study 2a, Study 2b examines the role of individual differences and
nuance in human-AI teaming influence. However, rather than studying AI teaming
influence in a single task, Study 2b looks at role interdependence in human-AI teams
by having AI teammates complete a varying number of tasks. Additionally, Study
2b will take a more critical look at the design of AI teammates to see if there are
any ways human-AI teams can be designed to encourage humans to accept them at
a greater rate. Study 1 produced a large variety of design recommendations based
on participants’ having different reasons as to why they accepted and adapted to
AI teammate teaming influence. Thus, Study 2b also examines the potential design
recommendations suggested by participants in Study 1, among others, to examine whether
they benefit humans’ perceptions.
Based on the above motivations, the following research questions serve as the
driving focus of this study:
RQ3.3 What do humans see as the ideal distribution for teaming influence between
human and AI teammates when completing a multitude of shared and related
tasks?
RQ3.4 How can endorsements of an AI teammate’s capabilities be included in human-
AI teams to promote susceptibility for AI teammate social influence?
5.6 Study 2b: Methods
While Study 2a focused on responsibility and identity, Study 2b turns its focus towards how to present the capabilities of AI teammates to improve human perception.
Importantly, given that this work is broadly focused on human-AI teamwork and
Study 2a focused on the presentation of identity, this study will only focus on AI
teammates with the goal of improving their specific perception. To accomplish this,
Study 2b differed in two key ways from Study 2a. First, Study 2b furthered the
understanding of RQ3.3 by examining shared responsibility across multiple tasks,
which is likely to occur in future human-AI teams [348, 252]. Second, Study 2b focused
on RQ3.4 by examining how presenting an AI teammate’s capability through various
endorsements could impact human perception. These two expansions created by
Study 2b further our understanding of RQ3.3 and provide a critical answer to RQ3.4
by providing actionable ways researchers and designers can improve the perceptions
humans have of AI teammates that take on a substantial amount of responsibility in
human-AI teams.
5.6.1 Recruitment
Recruitment procedures were similar to Study 2a: the Prolific survey platform
was used, participants were restricted to the United States, and the subject pool was
limited to those who work in industries related to software and information technol-
ogy. In total, 303 participants completed the survey, which was designed to take
20 minutes, and each participant was paid $3.50 for survey completion. Multiple
attention checks were administered during the survey. Participants were not compen-
sated if they failed two of the three attention checks, and their data was not used for
analysis. In total, six participants failed attention checks or completed the survey in
an unreasonably short time and were excluded from this study. The average age of
participants was 33.59 years (SD = 10.04), and the average survey completion time
was 20.44 minutes (SD = 36.84). Further demographic information can be found in
Table 5.9.
Gender: Male 191, Female 103, Non-Binary 3, Prefer not to say 0, Prefer to Specify 0
Race: White 199, Black or African American 27, Latino or Hispanic 16, Asian 26, Native Hawaiian or Pacific Islander 1, Native American or Alaskan Native 1, Multicultural 27
Education Level: High School Graduate 22, Some College 46, Associate's Degree 28, Bachelor's Degree 158, Master's Degree 41, Doctoral Degree 1
Table 5.9: Study 2b Demographic Information
5.6.2 Experimental Design
Similar to Study 2a, Study 2b leverages two different manipulations. The ex-
perimental design and theoretical underpinnings of these manipulations are discussed
below. Additionally, a summative and graphical representation of these manipula-
tions can be found in Table 5.11.
5.6.2.1 Manipulation 1: AI Teammate Responsibility
Manipulation 1 for this study also concerned the responsibility of the AI team-
mate. However, unlike Study 2a, Study 2b presented this responsibility in the form of
multiple shared tasks as opposed to a single shared task. These tasks can be found
in Table 5.10 and this manipulation determined how many tasks were assigned to the
human and how many were assigned to the AI teammate. This manipulation is still
supported by AI research into the levels of autonomy, which can have AI systems
perform more functions as they increase in autonomy [332], but this update also pro-
vides a critical teaming consideration as teammates often have to interdependently
share tasks to complete a shared goal [375].
Similar to Study 2a, manipulation 1 utilized a within-subjects design where
each participant was provided with each condition level. Seven condition levels in
total were used where an increase in the condition level resulted in a task being taken
from the human teammate and given to the AI teammate. Importantly, humans
were also told they would be responsible for monitoring the performance of the AI
teammate as it would be unrealistic to fully remove the human from the equation
[195], and doing so would mean that this is no longer a human-AI team.
5.6.2.2 Manipulation 2: AI Teammate Endorsement
Manipulation 2 for this study is specifically concerned with the presentation
of an AI teammate’s capabilities. Specifically, this presentation is done through the
concept of endorsements where an aspect of the AI teammate is explicitly endorsed.
This manipulation was a between-subjects design with six condition levels, each of
which has its own theoretical justification.
Condition 0: No Endorsement (Control): A control condition was created
for the endorsement of an AI teammate’s capabilities. This control condition is used
to provide a baseline from which the other endorsements can be compared. This
comparison ensures that the findings of this study can assess whether an endorsement is
harmful and which endorsement has the strongest benefit to perception. Additionally,
it is critical to provide a baseline for how humans naturally perceive an AI teammate
sharing responsibility across a variety of tasks. The shorthand labels used to identify
these conditions within the results can be found in Table 5.11.
Condition 1: Coworker Endorsement: The existence of external social influence is an important consideration for the acceptance of technology [452].
As such, coworker endorsements provide a means of providing this social influence
from a peer perspective. Participants in this condition were told that their coworkers
(1) have accepted the AI into their teams, (2) have seen their teams increase in
productivity, and (3) enjoy working with the AI teammate.
Condition 2: Expert Endorsement: Social influence is not just something
applied by peers as managers or experts could also impact technology acceptance
[452]. However, peer social influence may be different due to the trust humans form
for their peers over experts or managers, meriting the separation of these constructs.
Participants in the expert endorsement condition were told that experts in the field
of AI (1) are helped by the AI teammate, (2) have seen team productivity increase
after accepting the teammate, and (3) report that the AI has been empirically validated on the
assigned responsibility.
Condition 3: Past Performance Reporting: In addition to endorsements, past
research has also noted that humans should be told performance statistics of AI
teammates to improve perception [22]. As such, participants in the past performance
condition were told the AI teammate (1) can complete its assigned responsibility with
a 95% accuracy, (2) can complete its tasks 40% faster than most humans, and (3)
has been trained on real-world software projects. These endorsements communicate
a high-performing AI teammate with a large degree of utility, another critical
consideration for acceptance [106].
Condition 4: Previous Observation: While this research is focused on the
acceptance of AI teammates before interaction, condition four examines the benefit
of having humans observe an AI teammate working in a different team. While this
observation is no replacement for the real interaction, past research has noted that
these observations can help increase human understanding and potential adoption
[22]. Humans in this condition were told (1) they have seen the AI teammate com-
plete the assigned tasks in another team, (2) the humans they observed also enjoyed
working with the teammate, and (3) the observed teams’ productivity did increase
after accepting the AI teammate.
Condition 5: Override: Finally, condition 5 reassures humans of the control they
will have over their AI teammate. This feeling is critical to technology acceptance
[452], and it could become more important as the level of autonomy and responsibility
of AI increases. As such, participants in this condition were told that they would (1)
have an application that would tell them the actions the AI was planning to make, (2) be allowed to veto or override an action about to be performed, and (3) be able to eventually remove the AI teammate after a prolonged period of time if the teammate does not
help the team.
Software Developer Responsibilities
Task # Task Description
1 Checking Code for Spelling Errors and Typos
2 Checking Code for Logic Errors
3 Writing Code Inside Designed Function Blocks
4 Designing Code Based on a Software Development Plan
5 Creating a Software Development Plan Based Off of Written Require-
ments
6 Creating Written Requirements Based Off of Client Interviews
7 Interviewing Clients about the Requirements of the Software
8 Overseeing AI Teammate (Not Presented Until Vignette)
Table 5.10: Study 2b: Tasks that need to be completed by software developers and
are assigned to teammates in surveys.
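For clarity, the sketch below (hypothetical code, not part of the survey software) shows how the seven within-subjects condition levels of Manipulation 1 map the tasks in Table 5.10 to the AI teammate, with the human always retaining the oversight task.

```python
TASKS = [
    "Checking Code for Spelling Errors and Typos",
    "Checking Code for Logic Errors",
    "Writing Code Inside Designed Function Blocks",
    "Designing Code Based on a Software Development Plan",
    "Creating a Software Development Plan Based Off of Written Requirements",
    "Creating Written Requirements Based Off of Client Interviews",
    "Interviewing Clients about the Requirements of the Software",
]

def assignment(level: int) -> dict:
    """Condition levels 1-7: the AI completes the first `level` tasks; the human
    keeps the rest and always oversees the AI teammate (Task 8)."""
    return {
        "ai": TASKS[:level],
        "human": TASKS[level:] + ["Overseeing AI Teammate"],
    }

for level in range(1, 8):
    split = assignment(level)
    print(f"Condition {level}: {len(split['ai'])} AI tasks / {len(split['human'])} human tasks")
```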
5.6.3 Procedure
Study 2b’s survey followed a fairly similar procedure to that of Study 1, and
these differences are discussed below. Study 2b was also piloted two distinct times.
Both piloting runs led to updates in the design and operationalization of the manip-
Manipulation 1: AI Responsibility (Within)
Operationalized by # of tasks completed by AI
AI Completes Task 1
AI Completes Tasks 1 & 2
AI Completes Tasks 1, 2, & 3
AI Completes Tasks 1, 2, 3, & 4
AI Completes Tasks 1, 2, 3, 4, & 5
AI Completes Tasks 1, 2, 3, 4, 5, & 6
AI Completes Tasks 1, 2, 3, 4, 5, 6, & 7

Manipulation 2: AI Capability Endorsement (Between)
Condition Label | Description | Related Factor
None | No bullet point list provided | Control
Coworker Endorsed | List of endorsements made by coworkers that focus on how AI teammates help their team and are easy to work with | External Social Influence [452]
Expert Endorsed | List of endorsements made by researchers that focus on the improvements workplaces have seen since adopting the AI teammate | External Social Influence [452]
Past Performance | List of technical capabilities and task performance rates reported on by developers and creators of the AI teammate | Capability Communication [22]
Previously Observed | List of observations made first-hand by participants that focus on how helpful and easy-to-integrate the AI teammate is | First Hand Experience [22]
Human Override | List of endorsed ways that humans can control the actions and behaviors of their AI teammate if they see fit | Feeling of Control [452]
Table 5.11: Study 2b experimental manipulations. Manipulation 1 varies the number of tasks completed by the AI teammate, and in turn the human participant. Manipulation 2 varies the endorsement provided to encourage AI teammate adoption. The descriptions in Manipulation 2 are not the full bullet point lists shown to participants.
ulations to ensure they were both distinct and visible. This piloting also allowed the
technical validity of the survey to be confirmed.
5.6.3.1 Informed Consent & Pre-Surveys
The informed consent and pre-survey process for Study 2b followed a nearly identical structure to that of Study 2a. The only notable difference is that the informed
consent letter was updated to reflect the correct time commitment for Study 2b.
5.6.3.2 Introduction of the AI Teammate
Given that the manipulations changed from Study 2a to Study 2b, the in-
troduction of AI also changed. First, the AI was always introduced as a teammate.
Second, participants were told that the AI teammate would be tasked with helping
complete a list of software development tasks. These tasks, which were split amongst
participants and AI teammates based on manipulation 1, were all presented to the participant as potential tasks both of them could be assigned. This added presentation
allowed participants to have a better understanding of this more complex task.
5.6.3.3 Vignette Structure & Manipulations
The vignettes for Study 2b were fairly similar to those of Study 2a. The main
difference was the presentation of the manipulations. First, the AI was always pre-
sented as a teammate in these vignettes. Second, the presentation of responsibility,
while still a table, now listed each task assigned to the AI and each task assigned
to the human. This manipulation differed per vignette, each participant received
all seven condition levels, and the ordering of conditions was randomized to reduce
and normalize any potential spillover effects. Finally, at the end of each vignette
before the questions were presented, manipulation 2 was provided. Manipulation 2
was provided through a one-sentence endorsement of the AI teammate followed by
three bullet points that further elaborated this statement. Participants were assigned
their condition for manipulation 2 when first opening the survey, and they retained
the same condition for each vignette presented. For an example vignette, please refer
to the visual representation in Table B.21 in the Appendix.
5.6.3.4 Individual Vignette Measures
The questions provided at the end of each vignette within Study 2b mirrored
those provided in Study 2a for participants in the teammate condition. Please refer
back to the prior methods section for a full explanation of each question provided.
5.7 Study 2b: Experimental Results
Similar to Study 2a, cumulative link mixed modeling was used to analyze the
results of Study 2b. The responsibility manipulation is represented by a numeric value
of 1-7 in the linear model, corresponding to the number of tasks assigned to the AI. Furthermore,
given that Study 2b looks to examine the effects of capability endorsement compared
to a control group, the capability endorsement condition was transformed into a set
of dummy variables. These dummies were added as a set to the linear model. An omnibus test is conducted to test the overall effect of the capability endorsement manipulation before the effects of individual conditions are assessed, and individual fixed-effects analysis is then used to discuss the significance of individual endorsements compared to the control group. Tables that detail the significance of models created
are provided, and to reduce the complexity of the text, tables that detail the fixed
effects of significantly improved models are also provided instead of listing said effects
in the text. Graphs that display means based on the responsibility and capability
endorsement condition are also provided.
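As a companion to the Study 2a sketch, the snippet below illustrates (again with hypothetical column names and a linear mixed model standing in for the cumulative link model) how the endorsement conditions can be treatment-coded against the control group and tested with omnibus likelihood-ratio tests before individual dummies and interactions are inspected.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical long-format Study 2b data: one row per participant x vignette with
# pid, responsibility (1-7 tasks assigned to the AI), endorsement (Control,
# Coworker, Expert, PastPerformance, Observed, Override), and capability.
df = pd.read_csv("study2b_long.csv")

def fit(formula):
    return smf.mixedlm(formula, df, groups=df["pid"]).fit(reml=False)

m1 = fit("capability ~ responsibility")
m2 = fit("capability ~ responsibility + C(endorsement, Treatment('Control'))")
m3 = fit("capability ~ responsibility * C(endorsement, Treatment('Control'))")

def lrt(small, big, df_diff):
    """Likelihood-ratio chi-square for the block of terms added to the model."""
    lr = 2 * (big.llf - small.llf)
    return lr, stats.chi2.sf(lr, df=df_diff)

print("endorsement (5 df):", lrt(m1, m2, 5))
print("responsibility x endorsement (5 df):", lrt(m2, m3, 5))
print(m3.summary())  # individual dummies and interactions vs. the control condition
```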
5.7.1 Capability of AI Teammate to Complete Responsibility
For participants’ perceived capability of an AI teammate, the responsibility
assigned to the AI teammate and the interaction between responsibility and capabil-
ity endorsement significantly improved the linear model (Table 5.12). Analysis of the
final model’s fixed effects revealed a significant effect of responsibility on perceived
capability (Table 5.13)—a similar negative trend to that shown in Study 2a. Fixed
effect analysis revealed that no single capability endorsement method significantly mitigated the negative effect of increased responsibility (Table 5.13). Due to the significant interaction effect, the main effects
Figure 5.6: Graph of AI capability based on teammate responsibility and capability
endorsement. Error bars denote 95% confidence interval.
Model                                             χ²      df   p-value
Capability (1 | pid)
  + AI Responsibility                            705.01    1   < .001
  + Capability Endorsement                         7.79    5     .168
* + AI Responsibility:Capability Endorsement      18.30    5     .003
*Denotes model used for analysis of fixed effects.
Table 5.12: Linear model for effects of responsibility and capability endorsement on capability of AI teammate. Each model is built upon and compared to the one listed above it.
Factor                                     β      SE    df     t       p-value   Cohen's d
AI Responsibility                         -.61    .05   1776   11.34   < .001      .54
Coworker Endorsed                          .70    .54    291    1.31     .190
Expert Endorsed                           1.13    .52    291    2.15     .032      0.25
Past Performance                           .94    .53    291    1.77     .080
Previously Observed                        .38    .53    291     .72     .470
Human Override                            1.03    .55    291    1.89     .059      0.22
AI Responsibility:Coworker Endorsed        .07    .08   1776     .92     .359
AI Responsibility:Expert Endorsed         -.07    .08   1776     .95     .340
AI Responsibility:Past Performance        -.14    .08   1776    1.80     .072
AI Responsibility:Previously Observed      .11    .07   1776    1.47     .141
AI Responsibility:Human Override          -.14    .08   1776    1.80     .072
Table 5.13: Table of the Selected Model's Fixed Effects of Responsibility, Capability Endorsement Methods, and Interactions on the Perceived Capability of the AI Teammate. Effect sizes shown for significant effects and effects that neared significance.
of endorsement were included, and expert endorsements and override capabilities were
shown to have significant and near-significant effects, respectively (Table 5.13). An
analysis of the interaction effect revealed that no single effect was significant, but
past performance reporting and override capabilities neared significance (Table 5.13).
These results indicate that expert endorsements and override capabilities best benefit perceived capability, but override capabilities could be less beneficial as the responsibility of an AI teammate grows.
5.7.2 Potential Helpfulness of AI Teammate
For the perceived helpfulness participants saw in the AI teammate, the responsibility assigned to the AI teammate and the interaction effect between responsibility and capability endorsement were significant (Table 5.14). An analysis of the fixed ef-
fects revealed that, unlike Study 2a, helpfulness generally demonstrated a downward linear rather than a quadratic relationship with AI teammate responsibility, suggest-
ing that humans may have different preferences when sharing multiple tasks with AI
teammates as opposed to a singular task (Table 5.15). The significant interaction ef-
fects demonstrate that coworker endorsements, previous observations, and the ability
to override the AI teammate all significantly mitigate the negative effect of increas-
ing responsibility, with previous observations having the strongest effect of d = .22,
which almost completely overcomes the negative effect of increased responsibility (Ta-
ble 5.15). This effect shows that AI teammates are perceived as being less helpful
when they are assigned more work, but that AI teammates can be seen as helpful
despite a greater level of responsibility when humans have previously observed the AI
teammate.
Figure 5.7: Graph of helpfulness of AI based on teammate responsibility and capa-
bility endorsement. Error bars denote 95% confidence interval.
Model                                             χ²      df   p-value
Helpfulness of AI (1 | pid)
  + AI Responsibility                            126.74    1   < .001
  + Capability Endorsement                         7.71    5     .173
* + AI Responsibility:Capability Endorsement      20.42    5     .001
*Denotes model used for analysis of fixed effects.
Table 5.14: Linear model for effects of responsibility and capability endorsement on helpfulness of AI teammate. Each model is built upon and compared to the one listed above it.
Factor β SE df t p-value Cohen’s d
AI Responsibility -0.40 .05 1776 7.50 < .001 .36
Coworker Endorsed -.03 .49 291 .06 .955
Expert Endorsed 0.18 .48 291 .38 .704
Past Performance .32 .48 291 .67 .504
Previously Observed -.38 .49 291 .78 .438
Human Override .03 .51 291 .07 .948
AI Responsibility:Coworker Endorsed .19 .08 1776 2.46 0.014 .29
AI Responsibility:Expert Endorsed .12 .07 1776 1.61 .11
AI Responsibility:Past Performance .08 .08 1776 1.10 .270
AI Responsibility:Previously Observed .32 0.08 1776 4.26 < .001 .50
AI Responsibility:Human Override .16 .08 1776 2.01 .044 .24
Table 5.15: Table of the Selected Model’s Fixed Effects of Responsibility, Capability
Endorsement, and Interactions on AI Helpfulness. Effect size only shown for signifi-
cant effects.
Figure 5.8: Graph of helpfulness of self based on teammate responsibility and capa-
bility endorsement. Error bars denote 95% confidence interval.
Model χ² df p-value
Helpfulness of Self (1|pid)
+ AI Responsibility 806.24 1 < .001
+ Capability Endorsement 8.04 5 .154
* + AI Responsibility : Capability Endorsement 17.44 5 .003
*Denotes model used for analysis of fixed effects.
Table 5.16: Linear model for effects of responsibility and capability endorsement on
helpfulness of one’s self. Each model is built upon and compared to the one listed
above it.
Factor β SE df t p-value Cohen’s d
AI Responsibility -.71 .06 1776 12.83 < .001 0.61
Coworker Endorsed -0.40 .48 291 .82 .410
Expert Endorsed 0.79 .48 291 1.65 .100
Past Performance .20 .48 291 .42 .676
Previously Observed .01 .48 291 .03 .977
Human Override 1.32 .52 291 2.57 .011 .30
AI Responsibility:Coworker Endorsed .20 .08 1776 2.55 0.011 .12
AI Responsibility:Expert Endorsed -.07 .08 1776 0.89 .37
AI Responsibility:Past Performance -.01 .07 1776 .15 .880
AI Responsibility:Previously Observed -.02 0.08 1776 .27 .789
AI Responsibility:Human Override -.12 .08 1776 1.49 .136
Table 5.17: Table of the Selected Model's Fixed Effects of Responsibility, Capability
Endorsement, and Interactions on Helpfulness of Self. Effect size only shown for significant effects.
5.7.3 Helpfulness of Self Alongside AI Teammate
For perceived helpfulness of one’s self, the responsibility assigned to the AI
teammate and the interaction effect between responsibility and capability endorse-
188
ment significantly improved the model (Table 5.16). Analysis of the selected model’s
fixed effects revealed a similar trend to Study 2a, where increases in AI teammate re-
sponsibility results in decreases in the perceived helpfulness of one’s self alongside said
AI teammate (Table 5.17). Additionally, override capabilities had a significant and
positive main effect, and coworker endorsements also had a significant and positive in-
teraction effect. These results indicate that one’s perceived helpfulness of themselves
is largely driven by shared responsibility, but minor benefits can be made to these
perceptions through override capabilities and coworker endorsements. This creates
an interesting challenge, as humans need to feel like they can meaningfully contribute
to their teams even when AI teammates gain increased responsibility—thus, extra
care should be placed on the level of responsibility of an AI teammate so as to not
inadvertently diminish human involvement.
5.7.4 Job Security Concerns Created by AI Teammate
For participants’ perceived job security, the responsibility assigned to the AI
teammate significantly improved the model, but the main effect of endorsement and
the interaction effect neared but did not reach significance (Table 5.18). Analysis of
the selected model’s fixed effects revealed that as the responsibility of the AI teammate
increases, participants’ job security was heavily affected in a negative way (Table
5.19). These results demonstrate that perceived job security is heavily predicated on
the division of responsibility with an AI teammate’s capabilities having very little if
any impact on these perceptions prior to interaction.
Figure 5.9: Graph of job security based on teammate responsibility and capability
endorsement. Error bars denote 95% confidence interval.
Model χ² df p-value
Job Security (1|pid)
* + AI Responsibility 907.84 1 < .001
+ Capability Endorsement 10.43 5 .063
+ AI Responsibility : Capability Endorsement 10.68 5 .058
*Denotes model used for analysis of fixed effects.
Table 5.18: Linear model for effects of responsibility and capability endorsement on
job security. Each model is built upon and compared to the one listed above it.
Factor β SE df t p-value Cohen’s d
AI Responsibility -.74 .03 1781 27.00 < .001 1.28
Table 5.19: Table of the Selected Model’s Fixed Effects of Responsibility and Capa-
bility Endorsement on Job Security. Effect size only shown for significant effects.
5.7.5 Likelihood to Adopt AI Teammate
For participants’ likelihood to adopt the AI teammate, the workload assigned
to the AI teammate, the capability endorsement of said AI teammate and the in-
teraction between these two features significantly improved the linear model (Table
5.20). Analysis of the final model’s fixed effects revealed a significant negative effect
of increasing the AI’s responsibility on adoption likelihood in the control condition
(Table 5.21), which signals a similar negative trend to that shown in Study 2a. Hav-
ing a prior observation of the AI teammate provided a significant and positive main
effect (Table 5.21). Additionally, coworker endorsements and previous observations
Figure 5.10: Graph of likelihood to adopt AI based on teammate responsibility and
capability endorsement. Error bars denote 95% confidence interval.
Model χ² df p-value
Adoption Likelihood (1|pid)
+ AI Responsibility 601.96 1 < .001
+ Capability Endorsement 10.81 5 .055
* + AI Responsibility : Capability Endorsement 30.30 5 < .001
*Denotes model used for analysis of fixed effects.
Table 5.20: Linear model for effects of responsibility and capability endorsement on
likelihood to adopt. Each model is built upon and compared to the one listed above
it.
Factor β SE df t p-value Cohen’s d
AI Responsibility -.68 .06 1776 12.35 < .001 .59
Coworker Endorsed .17 .51 291 .33 .755
Expert Endorsed .40 .50 291 .80 .422
Past Performance .99 .50 291 1.97 .049 0.11
Previously Observed -.16 .50 291 .32 .749
Human Override 0.89 .53 291 1.68 .093
AI Responsibility:Coworker Endorsed .22 .08 1776 2.88 .004 .14
AI Responsibility:Expert Endorsed .12 .08 1776 1.61 .107
AI Responsibility:Past Performance -.09 .08 1776 1.16 .245
AI Responsibility:Previously Observed .26 .08 1776 3.40 < .001 .16
AI Responsibility:Human Override .02 .08 1776 .21 .831
Table 5.21: Table of the Selected Model’s Fixed Effects of Responsibility, Capability
Endorsement, and Interactions on Adoption Likelihood. Effect size only shown for
significant effects.
of the AI teammate were able to significantly mitigate (but not fully overcome) the
negative effect of increasing the AI’s responsibility (Table 5.21). Similar to Study
2a, these results show how the likelihood for humans to adopt an AI teammate de-
creases as said AI teammate is assigned a greater responsibility. In Study 2b, this
finding is extended from the share of responsibility for a single task to the number
of responsibilities across a collection of tasks. Additionally, Study 2b shows the crit-
ical importance of coworkers and one’s own previous experience in encouraging the
adoption of AI teammates when they are assigned a greater amount of responsibility.
5.7.6 Summary of Study 2b Results
The results of Study 2b tell a similar story to those of Study 2a: increasing
the responsibility of a prospective AI teammate negatively affects human perception.
Moreover, Study 2b revealed that this effect transcends singular tasks and applies to
the division of workload across multiple tasks as well. However, Study 2b also revealed
that these declining effects can be somewhat mitigated through the endorsement of
an AI teammate’s capabilities. Specifically, coworker endorsements and having a
prior observation of the prospective AI teammate consistently showed the strongest
improvement in the perceptions measured. This finding signals the importance of
having observed AI teammates before bringing them into a team, and that observa-
tions can either come from oneself or from a coworker in a similar setting. However,
the most interesting outcome mirrors that of Study 2a, where job
security decreases with increased responsibility at a much faster rate than perceived
capability and helpfulness. This suggests that the concerns humans have for their
job security are not coming from a fear that their AI teammate will be so highly
skilled that they themselves become obsolete, but rather from a reflection upon the
(arguably institutionally mandated) role said AI teammate is going to occupy in their
team despite its abilities. This finding creates a critical consideration for researchers
and practitioners of AI teammates to balance both the contribution of an AI team-
mate and the impact said contribution could have on human perception. Achieving
this balance between the task-based needs of human-AI teams and the social-based
wants of human teammates is going to be one of the most delicate and important
challenges facing human-AI teams.
Model χ² df p-value
Helpfulness 1
+ Computing Continuum 8.34 1 0.006
+ Cynical Attitudes 14.06 1 <0.001
Measure Estimate SE t(206) p-value
Computing Continuum 0.04 0.01 3.40 < 0.001
Cynical Attitudes -0.06 0.02 -3.63 < 0.001
Table 5.22: Model Comparisons and Coefficients for AI Helpfulness
5.8 Study 2: Individual Differences Results
Similar to Studies 2a and 2b, the individual differences results will be organized
by dependent variable. As a note, this analysis only utilizes the data
from Study 2a, as its lower number of between-subjects conditions means that participant
experiences were more consistent. Instead of linear mixed effects modeling,
the analysis of individual differences uses standard linear regressions, as that is the
common method for individual differences analysis [357, 170]. A stepwise approach
was used in which measures were added to the model one at a time, with each model
compared to the previous one and only significant improvements retained. Additionally,
participant data were averaged across all seven of their repeated measures
because standard regressions are more robust for between-subjects data. While this
methodology reduced power, it better targeted the RQ of Study 2, which focuses on
the general susceptibility to AI influence. Finally, given the large number of individual
differences examined, only measures found to significantly relate to the dependent
variables will be discussed.
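As a concrete illustration of this procedure, the sketch below (Python, statsmodels) averages each participant's repeated measures and then adds individual-difference predictors stepwise, retaining only those that significantly improve the model. It is a hedged sketch rather than the dissertation's actual analysis code, and the column names (pid, helpfulness, computing_continuum, cynical_attitudes) are hypothetical.

import statsmodels.formula.api as smf
from scipy import stats

def stepwise_ols(trial_data, dv, candidates, alpha=0.05):
    # Collapse the repeated measures to one averaged row per participant
    person_means = trial_data.groupby("pid").mean(numeric_only=True).reset_index()

    kept = []
    current = smf.ols(f"{dv} ~ 1", data=person_means).fit()
    for predictor in candidates:
        trial = smf.ols(f"{dv} ~ {' + '.join(kept + [predictor])}",
                        data=person_means).fit()
        chi_sq = 2 * (trial.llf - current.llf)   # likelihood-ratio (chi-square) statistic
        p = stats.chi2.sf(chi_sq, df=trial.df_model - current.df_model)
        if p < alpha:                            # keep only significant improvements
            kept.append(predictor)
            current = trial
    return current, kept

# Example usage with a long-format survey data frame `survey_df`:
# model, predictors = stepwise_ols(survey_df, "helpfulness",
#                                  ["computing_continuum", "cynical_attitudes"])
# print(model.summary())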
Model χ² df p-value
Self Benefit 1
+ Computer Efficacy 5.70 1 0.016
Measure Estimate SE t(206) p-value
Computer Efficacy 0.05 0.02 2.43 0.016
Table 5.23: Model Comparisons and Coefficients for One’s Own Perceived Benefit
5.8.1 Helpfulness of AI
For how helpful humans viewed their prospective AI teammates, their computing
continuum and their cynical attitudes towards AI teammates were significant
predictors. Specifically, participants who perceived computers, in general, to be more
capable also felt that AI teammates would be more helpful. However, participants
who reported more cynical views on AI technology felt that their prospective AI
teammates would not be as helpful. While the latter effect is somewhat expected, the
former effect is highly interesting as it demonstrates that participants may be
transferring their perceptions of general technology onto AI teammates. This finding
is in line with previous work on technology acceptance [452], but it is notable
that this effect extends to AI teammates.
5.8.2 Benefit of Self
Regarding how beneficial and helpful humans themselves felt they would
be, only their own perceived ability with computers was significant. What is interesting
about this effect, however, is that the greater the ability with computers participants
perceived themselves to have, the greater their perceived capability of themselves. This
suggests that humans may see themselves as better able to work alongside AI teammates
when they are more knowledgeable about computing.
However, when coupled with the previous effect on AI helpfulness, one can see that
Model χ² df p-value
Job Security 1
+ Cynical Attitudes 30.94 1 < 0.001
+ Relational FOMO 12.04 1 0.005
+ Affective Identity MTL 9.34 1 0.013
+ Personalized NFP 9.96 1 0.010
Measure Estimate SE t(206) p-value
Cynical Attitudes -0.06 0.02 -3.29 0.001
Relational FOMO -0.03 0.02 -1.88 0.062
Affective Identity MTL 0.03 0.01 3.12 0.002
Personalized NFP -0.04 0.02 -2.59 0.010
Table 5.24: Model Comparisons and Coefficients for Perceived Job Security
humans potentially see how their own tasks could benefit from their AI teammates.
In doing so, they themselves could benefit more from a task, as they would not be
spread thin over a large task, in turn increasing their own perceived benefit.
5.8.3 Job Security
Interestingly, perceived job security showed the strongest connections to participants'
individual differences. Specifically, cynical attitudes, fears of missing out,
motivations to lead, and one's need for power all helped predict participants'
views on job security. Among these, the relationship with cynical attitudes is most
obvious, as it would stand to reason that participants more apprehensive about AI
technology would have some job concerns, which is further corroborated by other measures
in this study. However, the more interesting aspect of this finding is the other
three measures, which were not significant predictors of the other dependent variables
in this study. This difference could simply be the result of job security being a perception
much more strongly tied to workplace concerns, such as fears of missing out or the
need for power. Using this result could heavily benefit the design of change
Model χ² df p-value
Capability 1
+ Conscientiousness 7.09 1 0.024
+ Computing Continuum 6.47 1 0.031
Measure Estimate SE t(206) p-value
Conscientiousness 0.06 0.02 2.38 0.018
Computing Continuum 0.03 0.01 2.18 0.031
Table 5.25: Model Comparisons and Coefficients for Perceived AI Capability
promotion material to help those with job security concerns. Specifically, future work could
address individuals' need for power or desire to lead as a means of reducing job security
concerns surrounding AI teammates, which may be a strategy unique to helping
workplace teams adopt AI teammates.
5.8.4 Capability of AI
When examining participants’ perceived capability of their AI teammate, only
two items were significant predictors, one of which is the recurring item of the comput-
ing continuum. However, the other significant item, which is big 5 conscientiousness,
poses an interesting perspective as this trait commonly points towards how organized
one is. Thus, it seems that the more organized an individual is, the greater their
expectation of an AI teammate’s capability. This measure may have a relation to ca-
pability as humans who are more organized see the mechanical and organized design
of machine systems as a benefit. Essentially, humans in this area may also see AI as
an effective tool due to their computational design, which may ultimately benefit this
perception as demonstrated by Study 2a’s experimental results.
Model χ² df p-value
Adoption 1
+ Computing Continuum 13.50 1 0.002
+ Cynical Attitudes 14.42 1 0.003
Measure Estimate SE t(206) p-value
Computing Continuum 0.05 0.01 3.64 < 0.001
Cynical Attitudes -0.05 0.02 -3.02 0.003
Table 5.26: Model Comparisons and Coefficients for Adoption
5.8.5 Adoption Likelihood
An analysis of adoption likelihood showed that two specific individual
differences, cynical attitudes toward AI and participants' computing
continuum, significantly improved the prediction of participants' likelihood
to adopt the AI teammate. These measures tell a similar story to the results for
previous measures; however, the interesting component here is that the individual
differences that predicted job security did not significantly predict adoption
likelihood. This suggests that the general susceptibility participants had towards
their AI teammates may not have been driven by complex teaming factors, such as
leadership motivations, but rather by general computing differences, such as computing
capabilities.
5.8.5.1 Summary of Individual Differences Results
Examining the above results in light of the experimental results of Study 2b,
it is clear that the effects of AI design have a much stronger and more consistent
impact on human perception than the general susceptibility created by individual
differences. However, this does not mean that human susceptibility is an unimportant
facet of the acceptance of AI teammate teaming influence. Rather, these results
reconfirm how important the perceived utility of AI teammates is. Specifically, participants'
computing continuum was consistently significant, suggesting that
the perceived utility of general technology provides a broad benefit to AI teammate
adoption, in line with past research on technology acceptance [452, 105].
Given these results, the findings of Study 2b become even more important as
they can be used to increase the perceived capabilities of AI teammates. More
importantly, not only does this mean that the capability endorsements of Study 2b
are important, but other past methods of capability endorsement for general technology
may similarly apply to AI teammates. This is promising, as it means that
the methodologies used to foster technology acceptance, namely human-centered
design, can carry forward into the design of AI teammates without hesitation
about potentially new individual differences complicating AI teammate
design. Thus, the following discussion of this study focuses on using the experimental
results of Study 2 to progress the design of AI teammates to make humans more likely
to accept them.
5.9 Discussion
This research’s results demonstrate the spectrum of human perception based
on the presentation of prospective AI teammates. This spectrum details the potential
positives and negatives that have to be considered when designing an AI teammate
for real-world teams. Furthermore, the current study's experiments explored whether the
presentation of an AI teammate’s capabilities could mitigate these effects to allow
prospective AI teammates to have a greater level of responsibility. Results demon-
strate that the presentation of AI teammate responsibility overwhelmingly affects
human perceptions, and the presentation of AI identity and capability can mitigate
the negative effects of increasing responsibility. In isolation, these effects are critical
to the design of AI teammates as they can provide researchers with an understanding
of how changes to AI teammates’ presentations are going to impact human perception
prior to interaction. However, two key questions arising from these
interpretations still need to be answered: (1) should the term “teammate”
be used with AI, and (2) how should designers balance responsibility, capability,
and identity based on these results? The following discussion tackles these questions
while also providing actionable design recommendations for AI teammates based on
these answers.
5.9.1 Balancing Human Teammates’ Needs and Wants for
AI Teammates
The results of this work, especially those in Study 1, show that presenting
an AI teammate in an ideal way is not entirely straightforward. Specifically, when
examining perceptions like job security or capability, perceptions of AI teammates are
negatively impacted when the AI holds a level of responsibility that either compares to
or surpasses that of their human teammates. However, if one were to solely look at the
adoption likelihood and perceived helpfulness of the AI, one would conclude that
AI teammates are best perceived when they share similar levels of responsibility
with their human teammates. Thus, one could draw different conclusions from this
research if it is not considered holistically. Based on the results of Study 1 and Study
2, two key results need to be holistically considered when designing AI teammates to
be human-centered: (1) adoption likelihood and perceived helpfulness follow different
trends than other perceptions in Study 1; and (2) adoption likelihood and perceived
helpfulness follow different trends between Study 1 and Study 2. These two results
along with how researchers and designers should use these results to design better AI
teammates are discussed in more detail below.
For result (1), as mentioned above, different conclusions can be drawn if one
looks at either adoption likelihood or job security in isolation. These
findings show a trend that exists within HCI, which is that the potential needs and
wants of humans may not entirely align [327, 233]. For instance, for the sake of job
security, humans may want AI teammates to have minimal roles, but humans recog-
nize that they need an AI teammate that has a fair amount of responsibility for it
to be helpful. Thus, researchers and designers would be at an impasse of whether to
design AI teammates based on wants or needs, but past research points to the idea
that the solution is to balance the two and not entirely design for one or the other
[360]. Thus, based on the results of this research, an initial balance could be achieved
by having AI teammates be responsible for a little less than their human teammates,
which would somewhat benefit adoption likelihood while not overtly harming per-
ceived job security. However, the results regarding presentation in Study 1 and Study
2 would suggest that this balance is not one-dimensional as other factors can influence
human perception. For instance, if an AI teammate were to receive a coworker
endorsement, which significantly benefits the human teammate's perceived job
security, then it could be assigned a greater level of responsibility while maintaining
a similar level of perceived job security.
Result (2) also expands on this idea of balance in that a potential balance
between human wants and needs may look different based on the context. For in-
stance, the ideal balance based on Study 1’s data would position AI teammates to
have relatively similar levels of responsibility as their human teammates, but adoption
likelihood had a linear trend in Study 2, which matched the other trends presented,
including that of job security. Thus, when AI teammates share tasks with humans,
it would be preferred for that AI teammate to have a relatively lower level of re-
sponsibility compared to their human teammate. This result points to the conclusion
that contextual differences may also be a factor to consider when balancing the needs
and wants humans have for AI teammates. However, this implication poses a large challenge to
human-AI team designers, as these teams are slated to exist not only in software
development domains [460, 456], but also in other domains, including medical [478] and
military settings [285]. Thus, the results of this research should not be the only
empirical findings considered when determining how to best balance the needs and
wants humans have for prospective AI teammates, and future work should extend this
line of research to other contexts.
Finally, it is important to note that the balance researchers determine now
for prospective AI teammates may not be definitive, because human perceptions
and preferences for technology inevitably change as their experience with technology
grows [452, 189]. Thus, the assigned responsibility and even capability presentation
of AI teammates preferred in this moment would likely change as AI teammates
naturally gain more recognition and understanding in the workforce. Two key findings
of this work speak to this potential for the needle to move with time: (1) the finding
that job security declines even though perceived capability also declines, and (2) the
finding that the effect sizes of responsibility are demonstrably larger than those for
any capability presentation method. These two findings indicate that meaningfully
increasing the perceived job security, or even the adoption likelihood, of AI teammates
is not actually tied to perceptions of their capability. Even an AI
teammate that humans fully doubt will succeed still causes worry, and we
should not rush humans into overcoming that worry. Rather, using the results of this
study, that concern should be accounted for in design, resulting in AI teammates
that do not do everything they can do but rather what they should do, so that the
negative perceptions humans form do not prevent them from seeing the positives of
the technology. Thus, over time, humans would gradually gain more familiarity and
comfort with their AI teammates, in turn raising the individual perceptions examined
by this research.
5.9.2 Evaluating the Viability of the Term “AI Teammate”
The actual terminology of “AI teammate” has been called into question re-
cently as some believe it could negatively impact AI technology [402]. However, work
in human-AI teaming is rapidly progressing under the notion that AI teammates pro-
vide unique advantages to teams over simple AI tools [329]. This study provided the
first empirical exploration of how using the terms “teammate” and “tool” can directly
impact the perceptions humans develop for their AI collaborator. Importantly, the
results of this study show that AI having a tool identity does not universally improve
human perception; rather, the benefits of both teammate and tool identification are
somewhat nuanced. For instance, the results of this study also demonstrate that AI
teammates are not solely evaluated on their label but also on various other aspects,
such as their assigned responsibility. Moreover, presenting an AI as a tool is not
the only way to benefit human perception, as AI teammates can benefit from vari-
ous other presentations surrounding their capabilities. For instance, when looking at
perceived job security, presenting AI identities as tools can be beneficial, but so too
can coworker recommendations for AI teammates. Given this and other results from
Study 2, AI teammates can be perceived positively when their capabilities
and responsibilities are intelligently presented, which means both identifications
could be beneficial depending on how the other components of AI teammates
are presented. Moreover, forgoing a teammate presentation may not be entirely ideal
when one considers the potential benefit of AI teammates as opposed to tools.
While this study only explores the benefits AI could provide to sharing task
responsibilities, various other AI teammate components will contribute to the benefit
of human-AI teams. For instance, AI teammates could and should benefit team fac-
tors such as awareness [123, 100] or trust [335, 454], and humans want their ideal AI
teammates to benefit these factors [480]. However, the results of this work show that
despite humans wanting AI to have teammate capabilities, AI being a teammate, as
opposed to a tool, can negatively impact other perceptions relevant to AI teammates,
such as job security. Thus, it can be concluded that presenting AI as teammates and
benefiting team outcomes does not result in entirely positive perceptions.
Moving forward with this information, researchers should ask if these potential neg-
ative impacts of the teammate label are outweighed by the benefits to these other
teaming factors. While this research did not explore these others factors, past lit-
erature can be considered in concert with this research’s results to make an initial
determination. As one example, AI teammates can uniquely benefit factors like coor-
dination through leadership capabilities [142], and these coordination benefits can in
turn benefit human perception and interaction with AI teammates [109, 75]. While
this is but one example, it shows that having an AI be a teammate can also improve
human perception. Thus, the initial conclusion that can be drawn from this research is
that if an AI can benefit critical teaming factors, then it can in fact offset the initial,
negative impact that presenting an AI as a teammate can have.
However, this is not to say that every single future implementation of AI should
strive to be an AI teammate or call itself an AI teammate. As such, research should
continue to examine this topic with the understanding that humans hold stronger
expectations for teammates than for tools [402], and this study shows that those
expectations impact perception. Specifically, while Study 1 simply manipulated the
presented identity of an AI teammate, the actual capabilities of AI are also bound
to vary. For instance, based on the trajectory of current research, future AI systems
will be more capable of supporting specific factors that benefit teamwork, such as
those mentioned above [329]. As these advancements happen, the appropriateness
and acceptance of the teammate label by actual users may increase. Thus, future
research should continue to pursue two objectives in evaluating the AI teammate
identity. First, future research should continue to design and implement AI
teammates that explicitly benefit teaming functions, with the goal of creating AI
that earns the “teammate” identity. Second, research should continuously evaluate
the acceptance of the teammate identity among users as AI progresses to understand if
and when AI should be called “teammates”. In following these two initiatives, not
only would the perceptions identified in this work improve, but future AI, whether tools or
teammates, may better benefit teams.
5.9.3 Design Recommendations
The results of this study show that the responsibility AI teammates have in
teams will have a demonstrable impact on human perception, and changes
to an AI teammate's presentation can also benefit these perceptions. Thus,
researchers and designers of AI teammates need to work intelligently to craft these
roles and presentations to ensure human compatibility. However, these designs should
not center solely on the AI teammate itself, but also on the human-AI team
and the role of human teammates. The following are multiple design recommendations
that will benefit the perception of AI teammates based on the results of this study;
these recommendations are not limited to simple changes isolated to AI
design but also address the ecosystem surrounding AI teammates.
5.9.3.1 Early AI teammates should be responsible for small amounts of
existing tasks.
Regardless of whether one examines the results of Study 1 or Study 2, it is clear
that humans do not want AI to hold the majority of responsibility within a team.
Within Study 1, it is seen that humans ideally want to evenly split responsibility
with AI teammates, and Study 2 showed that each task solely assigned to an AI
teammate negatively impacts perception. Thus, when AI teammates are designed to
complete a task, they should not be designed to complete all of a task but rather
share it with humans. While previous research notes the importance of humans
sharing task responsibilities with AI teammates [150, 252], the results of this study
show how granular that sharing needs to be, with it happening on a per-task basis.
Moreover, this research also provides designers and researchers with a recommended
amount of responsibility that should be shared with AI teammates, with this
work suggesting that initial AI teammates be assigned relatively less responsibility
than their human teammates. This may require additional effort by
teams to divide individual tasks into sub-tasks that will be completed by AI
teammates and human teammates. Once again, while past research has noted that
new tasks and roles for humans will need to be created in light of AI's integration
[379, 116], this research and design recommendation posit that this division needs to
happen on existing, not future, workloads so humans are assured they will retain
the majority of responsibility before AI teammates are integrated.
5.9.3.2 Workplace demonstration events for AI teammates should be held
regularly.
The two presentation methods that consistently and significantly improved
human perception were a coworker endorsement and first-hand
past experience. Importantly, past research has identified
just how important these two presentations are to technology perception, with general
technology benefiting from coworker endorsements and past experience with technol-
ogy [451]. However, the results of this research show the specific importance of these
presentations to AI teammates, and how each one can benefit different perceptions
humans have that could impact the adoption of AI teammates. While this research
did not examine the combination of capability endorsements, the repeated significance
of these effects and the lack of negative effects denotes that the combination of these
endorsements would most likely not negate the benefits of one another. Additionally,
the importance of coworker endorsements also means that these events need to be
hosted not by technology experts or even managers but rather by coworkers who work
in highly similar roles to the attending audience.
The goal of these events would be to first provide humans with multiple
coworker endorsements, but if said endorsements are not enough for some individuals,
then these events would also provide the opportunity to showcase AI teammate
capability and provide humans with first-hand experience. The combination of these
two factors through this singular event would provide a broader benefit to human
perception where multiple concerns that human teammates may have can be allevi-
ated. A somewhat similar strategy was used in the past to encourage the adoption
of automation in aircraft systems [267, 266], and similar efforts could be translated into
recurring events specific to AI teammates. These events would also provide opportunities
to offer expert endorsements and demonstrate potential human overrides
and control, which could have minor added benefits to perception given the results
of this research. In doing so, perceptions across multiple human teammates
would collectively benefit from attending these internally hosted events.
5.9.3.3 AI tools should not be called AI teammates.
Based on the results of this study, identifying AI as either tools or teammates
can have significant impacts on perception. However, as AI advances, its ability to
directly contribute to team factors, such as trust, shared understanding, and even
communication, is going to increase. Therefore, the “teammate” label is not
something that should be used haphazardly. Importantly, Study 1, which saw that
the teammate identity harms perception, did not communicate to users the difference
in function or benefit between AI tools and AI teammates. This design was intentional,
as the natural assumptions humans make about AI teammates were the priority of
this research. However, the effects of Study 1 demonstrate that the teammate label
potentially needs to be justified, as evidenced by Study 2, which saw AI teammate
perception increase due to capability endorsement.
Thus, researchers and practitioners of AI teammates need to take extra care
when determining the roles and capabilities AI teammates and humans have in a team,
and how said roles and capabilities determine whether or not an AI is a tool or a team-
mate. But at its core, this is a decision that has to be made for all human-centered
technology [154, 307] and all teammates [122, 78]. Thus, a smaller, negative effect
should in fact not dissuade the use of the teammate label but rather highlight the
critical importance of holistically considering the needs and wants of the human-AI
team as a whole and the individual human teammates. While this balance is a chal-
lenge, as shown by this study’s results, it is a challenge that researchers in human-AI
teaming are well equipped and willing to handle. Past work has already begun work-
ing to identify how the capabilities of AI teammates have to be intelligently designed
not for maximum performance but rather maximum compatibility [42]. These efforts
should continue under the umbrella of human-AI teaming, and AI technology should
continue to advance in a way that will directly benefit what human teammates want
from AI technology.
5.9.4 Limitations and Future Work
Given that this work is foundational in its exploration of human preference for
workload assignment, there are still limitations that future research needs to tackle
to improve human-AI teams. Specifically, this study is limited in its population, con-
text, and online design. First, this study limited itself to a US population, which,
while necessary for this initial exploration, does introduce some cultural limitations to
this work. Future work should directly examine how potential cultural differences
can and will impact AI teammate acceptance. Second, the software development
context, while extremely timely and relevant, did limit our participant pool to more
technologically oriented participants. These participants may have been more or less
skeptical towards AI due to a greater potential understanding of the technology than
general populations. Future work should more explicitly examine the acceptance of
AI teammates in a variety of other contexts that have more or less technological in-
fluence. Finally, the online design of this study provides inherent limitations. Mostly,
participants were tasked with imagining their AI teammates based on the informa-
tion told to them. As such, the effects of this article, and especially study 2, may
differ when humans are actually presented with a real AI teammate that will actively
share responsibility with them. Future work should continue to examine research
surrounding these limitations to ensure the acceptance of AI teammates.
Chapter 6
Study 3: Understanding the
Creation of AI Teammate Social
Influence in Multi-Human Teams
6.1 Overview
While Study 1 explored the relationship between human teammates and AI
teammate social influence, there are still a variety of ways social influence can impact
human-AI teaming. Specifically, human-AI teams are not guaranteed to be
dyads or teams with only one human, meaning humans will not just experience the
teaming influence of AI teammates but also that of other human teammates. Thus, research
will not be able to fully understand how AI teammate teaming influence becomes
social influence without understanding how humans adapt to teaming influence when
multiple sources exist. Accordingly, this study builds upon Study 1 by exploring how the so-
cial influence that stems from teaming influence coming from an AI teammate changes
form when competing sources of teaming influence exist. In other words, this study
determines whether humans allow AI teammate teaming influence to become social
influence simply because it is the only teaming influence they experience or if this
process will happen in teams with multiple sources of teaming influence. Specifically,
this work examines interdependence to understand how human-human and human-AI
interdependence changes in light of human-human and human-AI teaming influence
existing simultaneously. Additionally, while Study 1 examined singular AI teammates
increasing the amount of teaming influence they impose, AI teammate teaming influ-
ence can increase in other ways. For instance, as AI systems become more capable,
their prevalence in society is going to increase in tandem, and in turn, the
number of AI teammates will similarly increase within individual teams. Thus, this
study observes the effects of multiple humans and multiple AI teammates operating
within the same human-AI team.
Importantly, this work provides critical insights regarding three key concepts:
(1) how the strength of human-human and AI-human teaming influence can compete
with each other; (2) if the transition from AI teammate teaming to social influence
can occur when multiple humans exist in a team; and (3) if increasing the number
of AI teammates, in turn increasing teaming influence, impacts the creation of social
influence. For (1), AI teammate teaming influence was previously observed in dyads
in Study 1, but human teaming influence may actually weaken the existence of AI
teammates’ teaming influence in teams. For (2), humans are most likely going to
have a preference for teaming influence, and the source of that teaming influence (i.e.
human or AI) may be a factor considered when forming this preference and whether
said teaming influence can become social influence. For (3), AI teammates are most
likely going to increase in number as their prevalence in society grows, and we need to know
if and how increasing the number of AI teammates within a team can increase their
teaming influence and, in turn, impact their social influence.
Given the above concepts and motivations of this research, the following re-
search questions, which answer dissertation RQ4 and RQ2, are the focus of this study:
RQ4.1 Does the presence of human-human teaming influence prevent AI teaming in-
fluence from becoming social influence?
RQ2.3 How do variations in the amount of AI and human teaming influence interact
with each other?
The above research questions, in addition to the research questions answered
by Study 1, provide a holistic answer for how humans allow AI teammate teaming
influence to become social influence. Future research will heavily benefit from this
understanding, as the concept of social influence is a concern for any research
that examines humans and AI systems interacting with each other. Moreover, the answers
to these questions will help clarify the multitude of ways in which the progress of
AI technology is going to impact the actual humans the technology is designed to
benefit.
6.2 Methods
6.2.1 Experimental Design
Based on the above research questions along with the themes of this disser-
tation, Study 3 focused on two key manipulations: (1) the number of AI teammates
imposing teaming influence in a human-AI team, and (2) the level of the existing rela-
tionship between humans in a human-human team, which would, in turn, manipulate
humans' prior experience with their human teammate's teaming influence. Specifi-
cally, these two manipulations provided a means of understanding how varying levels
of both AI teaming influence and human teaming influence - as operationalized by
the number of AI teammates and prior working experiences with human teammates
respectively - impact both human-human and human-AI interdependence. Further-
more, the conditions utilized a mixed design that allowed participant perceptions
of each number of AI teammates to be gathered (shown in Table
6.1). Moreover, this design provided a greater number of repeated measures for the
between-subjects manipulation.
Condition 1: Number of AI Teammates (Within)
3 AI Teammates : 2 human teammates (High Teaming Influence)
2 AI Teammates : 2 human teammates (Medium Teaming Influence)
1 AI Teammate : 2 human teammates (Low Teaming Influence)
Condition 2: Prior Existence of Human-Human Experience (Between)
Humans Play a Solo-Training Game (Low Existing Experience)
Humans Play a Dyad Training Game (High Existing Experience)
Study 3: Study Design Matrix (Solo/Dyad Training (Between) x Number of AI Teammates (Within))
1 AI Teammate + Solo Training
1 AI Teammate + Dyad Training
2 AI Teammates + Solo Training
2 AI Teammates + Dyad Training
3 AI Teammates + Solo Training
3 AI Teammates + Dyad Training
Table 6.1: Study 3 2x3 experimental design.
6.2.2 Task
The experimenter waited for both participants to arrive before beginning instructions
and informed consent. Upon entering the experiment space, participants
completed an informed consent form and took multiple pre-surveys. Participants were pro-
vided an Xbox controller to make it easier to play for experienced and beginner
players. Afterward, they underwent a tutorial, which included a basic button tutorial
with the controller and a free-play session where they could get more comfortable
with the gamepad and the game.
Then, participants played two warm-up games of Rocket League. Based on
the between-subjects condition, participants either played these warm-up games to-
gether or alone. Each warm-up game contained no AI teammates and a single AI
goalie. The use of a single AI goalie continued throughout all of the games played.
This modification slowed down the game and allowed participants to distinguish the
perceptions they formed of their AI teammates from their opponents.
After their warm-up games, participants played 3 games of Rocket League,
each with a different number of AI teammates. Regardless of their between-subjects
condition, participants played all 3 of their games together, meaning there were al-
ways 2 human teammates on each team. Moreover, while Study 1 concealed the manipulation
behind random Greek letters, this was not possible given that multiple
teammates were changed between games. Thus, the data collection of this study
focused more on people's reactions to apparent population differences as opposed to
concealed behavioral changes.
Each game consisted of a similar gameplay cycle, which included kick-off,
passing/moving the ball, and shooting/scoring. In the event that a team scored, the
game reset to kick-off and the cycle started over again. If the goalies blocked the
shot, then the team either went back to moving the ball or tried to score again. This
gameplay cycle presented multiple opportunities for teammates to exert teamwork
and become interdependent. Specifically, teammates could make selections on who
should perform the kick-off and try to kick the ball toward the goal. During the
ball-moving phase, teams could become interdependent by either dividing the field
into zones or each taking specific roles, such as defense or offense. Finally, the scoring
component offered a great deal of interdependent opportunity, as teammates could
take the role of shooter, act as back-up in case of a miss or block, or push
the goalies out of the way. Each of these components of the gameplay cycle occurs
in every game and they all provide opportunities to build, learn, and exert teamwork
and interdependence.
After each game, participants were tasked with taking post-task surveys, and
after all three games were finished they completed a short 10-15 minute, focus group
style interview. The ordering of the number of AI teammates was randomized on a
per two-participant-team basis. Participants were told they would be working with
AI teammates and each other, but they were not told any performance details about
their teammates.
6.2.3 Participants
Teaming studies are commonly difficult to recruit for due to the requirement of
having multiple humans present for a single experiment slot. However, the goals of this
study could not be achieved without two humans being collocated. Importantly, in
addition to human-AI dynamics being observable, this inclusion also allowed human-
human dynamics to be observed within a human-AI team, which is critical to the
applicability of this research. Unfortunately, the utilization of two human participants
also made it much more difficult to reach statistical power. Given that Study 1 and
Study 2 were able to meet power, it was determined that achieving power in Study
3 would not be the priority given the scope of the contribution. Commonly, teaming
studies aim to recruit 10 teams per condition, but this study was able to achieve
16 teams per condition, for a total of 32 teams and 64 human participants. Full
demographic information can be found in Table 6.2. The average participant age
was 18.34 (SD = 0.65). Participants were recruited through a university psychology
recruitment pool, and they received seven class credits for participation in the study,
which took around 1 hour and 45 minutes.
Gender
Male Female Non-Binary Prefer not to say Prefer to Specify
15 48 1 0 0
Race
White Black or African American Latino or Hispanic Asian Multicultural
51 1 5 3 4
Education Level
High School Graduate Some College Associate’s Degree
41 21 2
Table 6.2: Study 3 Demographic Information
6.2.4 Measurements
Since this study utilized a mixed-methods design, both traditional quantita-
tive measurements as well as interviews were used to collect participant data. The
data collection process can be broken up into four different sections: pre-task ques-
tionnaires, task-derived measurements, post-task questionnaires, and post-task inter-
views. Each of these components and the combination of them were critical to answer-
ing this study’s research questions. The majority of quantitative data collected for
this experiment consisted of similar measures to Study 1; however, this study placed
an emphasis on interdependence measures, as they can help reveal key behavioral
and interaction differences between teammates. Perceptual and behavioral measures
were also gathered for both human and AI teammates to facilitate comparison. While
the AI teammates and the human teammates in this study did not behave the same,
this comparison provides an accurate understanding of real-world human-AI teams
that will not employ AI teammates that mimic human teammates [329]. Addition-
ally, the interviews focused more heavily than in Study 1 on the interdependence humans
created with each other and their AI teammates, which provided
an understanding of the adaptation process between humans and AI teammates. The
following survey measures can be found in Appendix A.
6.2.4.1 Pre-Task Questionnaires
Demographics Pre-Task questionnaires targeted participants' prior experiences to
understand how perceptions formed before the study impacted their perception of AI
teammate and human teammate teaming influence. During this step, demographic
information for participants was also collected, including age, gender, and education
level. In addition to standard demographic information, participants were also asked
about their prior experience with Rocket League as this experience may indicate vary-
ing skill levels and could impact participants’ perceptions regarding AI teammate and
human teammate teaming influence. Additionally, data on whether or not partici-
pants had a prior relationship with their human teammate was collected to be used
as a potential control variable if any significant relationships were found.
6.2.4.2 Task-Derived Measurements
Scoring data was derived from each of the games played by the participants.
Individual teammates were rewarded with points for defending their goal, taking shots
on the goal, scoring points, and handling the ball efficiently. This data is important
as it allows insights into how increases in AI social influence may
impact human, AI, and team performance, either positively or negatively. This data
is displayed at the end of each game, and the experimenter recorded the individual
scores for each teammate (AI and human) along with the team’s overall score.
6.2.4.3 Post-Task Questionnaires
Since this study used a mixed design with both within- and between-subjects
conditions, post-task surveys were provided to each participant a total of
three times, once after each game. Thus, each participant provided a measure for
each number of AI teammates while also providing three measures for their between-
subject condition of practicing with or without their human teammate.
Perceived Teammate Performance Participants were asked about their percep-
tions regarding the performance of their human teammates and their AI teammates.
Questions were kept separate for human and AI teammate perceptions, such that each
participant answered the same questions twice, once about their human teammate
and once about their AI teammate(s). Perceived teammate performance was mea-
sured using twelve, five-point Likert scale questions that centered around the ability
of a teammate to accomplish their assigned task and effectively operate in team in-
teraction [101]. Scores were summed, with higher scores denoting that a participant
perceived their AI/human teammate as having a better performance.
Human-Machine-Interaction-Interdependence The most important measure
within this study is that of perceived interdependence, which is a survey scale created
between the completion of Study 1 and the start of Study 3. As previously mentioned,
the examination of teammate relationships within this experiment focused on the in-
terdependence created between humans and AI. Thus, the interdependence humans
perceived was collected for both human and AI teammates. Questions were kept sep-
arate for human and AI teammate perceptions, such that each participant answered
the same questions twice, once about their human teammate and once about their AI
teammate(s). Importantly, this survey contains multiple sub-scales that all represent
Subscale: Description
Perceived Mutual Dependence Between Teammates: Measures how much a human feels they and their teammate mutually depend on each other to complete a shared task.
Perceived Conflict Between Teammates: The degree of conflict humans perceived between themselves and their teammate when completing a shared task.
Perceived Power Compared to Teammate: How much power a human felt they had in their relationship with their teammate. A good representation of one's own perceived social influence.
Future Interdependence - Teammate to Self: How much humans would expect their teammate to be interdependent with themselves in a future interaction.
Future Interdependence - Self to Teammate: How much humans would expect themselves to be interdependent with their teammate in a future interaction.
Information Certainty - Teammate to Self: The degree to which humans understand how their teammates' actions affect their own perceptions and actions. A somewhat low-level proxy for team cognition.
Information Certainty - Self to Teammate: The degree to which humans understand how their own actions affect their teammates' perceptions and actions. A somewhat low-level proxy for team cognition.
Table 6.3: Human-Machine-Interaction-Interdependence Subscales
unique components of interdependence that AI teammate teaming influence could im-
pact, and the analysis of this study examines each of these subscales independently
[464]. Thus, the results of this study are heavily geared towards the interdependence
humans formed, which should provide humans’ perspectives of how AI teammates im-
pacted the dynamics between AI teammates and other human teammates. Moreover,
this metric was also specially designed for human-machine systems which makes it a
more appropriate measurement for this study. Importantly, while this scale is geared
towards machine systems, it was completed for human teammates too to ensure that
human and AI perceptions were evaluated on the same scale. For a list of the specific
sub-scales incorporated in this scale, please read the following table, Table 6.3.
AI Teammate(s) Acceptance While the focus of Study 3 was to link teaming
and social influence, the acceptance of AI teammates is important to this dissertation,
so it was also important to quantitatively measure the acceptance of the AI
teammates participants worked with. Given that AI teammates are in their infancy,
the measurement of their acceptance is not entirely exact and has thus been adapted
from existing technology acceptance measures. The most generalizable and applicable
measurement of acceptance utilizes multiple Likert scales to rate the perceived qual-
ities of technology, such as utility, desirableness, and efficiency [447]. Higher scores
denote a higher degree of acceptance of the AI teammate. This measurement was
used only to measure the acceptance of the AI teammate(s), not the human team-
mate, as (1) this measure is only intended to evaluate the acceptance of a technology,
not people; and (2) this dissertation is primarily concerned with the acceptance of
AI teammates, not human teammates or AI teammate acceptance in comparison to
human teammates.
Perceived Workload Participants were also asked about the overall workload in
completing the final game of the task. Workload was measured using the NASA Task
Load Index (TLX), which consists of six, twenty-one-point scale questions that ask
participants about mental workload, success, pacing, and other factors that contribute
to the overall workload and effort required to complete a task [172]. Higher scores
denote that a participant perceived a greater workload when completing the task.
6.2.4.4 Post-Task Interview
Instead of focusing on the individual relationship between human and AI team-
mates, Study 3’s post-task interview was conducted with both participants in a focus
group style. Interviews focused on the relationships both humans were able to form
with each other along with the perceptions formed for AI teammates. Additionally, a
large portion of the post-task interviews centered around the behavioral modifications
humans made and the awareness humans formed for their AI and other human team-
mates. This data provided high-fidelity insights into the interdependence humans
had with both their human partner and AI teammates, which provides a behavioral component to the perception data also collected. Thus, the interdependent relationships created between humans, other humans, and AI teammates were observed from both an individual, perception-based perspective and a holistic recounting of adaptation throughout multiple games. This methodology also allowed the collection of high-fidelity data that may not have been attainable through general survey metrics, especially given the novelty of AI teammates.
6.3 Study 3: Quantitative Results
For quantitative results, the two manipulation effects of the number of AI
teammates and participants’ training experience with their human teammates will
be examined. However, since participants completed the majority of surveys for both their human and AI teammates, an additional effect will be included: the difference between perceptions of human and AI teammates, which will be referred to as the effect of "teammate identity." While the behavior of human and AI teammates was not always guaranteed to be similar, the inclusion of this effect provides critical insight into how AI influence differently impacts human-human and human-AI relationships in an applied and realistic context that does not use human confederates. As such, the following analysis will use a repeated measures ANOVA (RMANOVA) that examines one between-subjects factor (training condition) and two repeated-measures factors (AI count x teammate identity). For main effects, Holm posthoc tests will be used due to their robustness with RMANOVAs.
6.3.1 Task Performance
Starting off with the only objective measure examined, the main effects of teammate identity (F(1,30) = 90.93, p < 0.001, η² = 0.40) and AI count (F(1.82,54.60) = 21.76, p < 0.001, η² = 0.07) were found to be significant, while the main effect of training type (F(1,30) = 0.67, p = 0.418, η² = 0.00) was found to be insignificant. However, the interaction effect between teammate identity and AI count was also found to be significant (F(1.68,50.47) = 49.54, p < 0.001, η² = 0.12).

An analysis of the simple main effects revealed that the effect of AI count was significant for both the performance of AI teammates (F(2,30) = 39.08, p < 0.001) and human teammates (F(2,30) = 4.56, p = 0.014). A posthoc analysis of this effect on AI teammate performance revealed that total AI teammate performance significantly increased from one to two teammates (t(30) = 5.49, p_holm < 0.001) and from two to three teammates (t(30) = 6.04, p_holm < 0.001). For the effect of AI count on human performance, the posthoc analysis revealed that performance did not significantly decrease from any level to another, even from one AI teammate to three AI teammates (t(30) = 1.65, p_holm = 0.306).

Figure 6.1: Figure of task performance based on the number of AI teammates and whether or not the perception is towards the human or AI teammate. Error bars denote 95% confidence intervals.

95% CI for Mean Difference
AI Count   Human or AI   Marginal Mean   Lower    Upper     SE
1          AI            460.50          362.95   558.05    49.32
2          AI            753.00          655.45   850.55    49.32
3          AI            1075.22         977.67   1172.77   49.32
1          Human         267.19          169.64   364.74    49.32
2          Human         224.03          126.48   321.58    49.32
3          Human         179.34          81.79    276.89    49.32
Table 6.4: Marginal means for the effects of AI count and teammate identity on task performance.
Additionally, an analysis of simple main effects also revealed that human and AI teammates scored significantly differently when having one AI teammate (F(1,30) = 9.94, p = 0.004), two AI teammates (F(1,30) = 78.58, p < 0.001), and three AI teammates (F(1,30) = 110.51, p < 0.001). Specifically, posthoc analysis revealed that AI teammates outperformed human teammates in total when there was one AI teammate (t(30) = 2.77, p_holm = 0.029), two AI teammates (t(30) = 7.58, p_holm < 0.001), and three AI teammates (t(30) = 12.85, p_holm < 0.001).
These effects signify that increasing the number of AI teammates heavily increased the total score of those teammates, which is to be expected. The addition of these teammates also had a small but significant overall impact on human performance, although that impact is hard to pin down given the insignificant posthoc comparisons. Given these effects, the number of AI teammates does not appear to be a large influence on the performance of human teammates, but it can cause an increasingly large gap between human and AI teammate performance.
6.3.2 Perceived Performance
For the perceived performance of teammates, the effect of teammate identity (F(1,30) = 11.96, p < 0.001, η² = 0.11) was significant, and the main effects of AI count (F(1.75,52.60) = 2.33, p = 0.114, η² = 0.01) and training condition (F(1,30) = 0.47, p = 0.498, η² = 0.01) were insignificant. However, the interaction effect between teammate identity and AI count was significant (F(1.86,55.69) = 9.55, p < 0.001, η² = 0.04).

An analysis of the simple main effects revealed that the effect of AI count was significant for the perceived performance of AI teammates (F(2,30) = 8.10, p < 0.001) but not for human teammates (F(2,30) = 2.40, p = 0.099). The posthoc analysis on the effect of AI perceived performance revealed that AI performance only significantly increased from one to three AI teammates (t(30) = 4.60, p_holm < 0.001). Perceived differences between human and AI teammates were not found to be significant when there was only one AI teammate (F(1,30) = 0.95, p = 0.34), but they were significant when there were two AI teammates (F(1,30) = 9.49, p = 0.004) and three AI teammates (F(1,30) = 17.21, p < 0.001). Specifically, perceived AI teammate performance was higher than perceived human performance when participants interacted with two AI teammates (t(30) = 3.30, p_holm = 0.017) and three AI teammates (t(30) = 4.73, p_holm < 0.001).

Figure 6.2: Figure of perceived performance based on the number of AI teammates and whether or not the perception is towards the human or AI teammate. Error bars denote 95% confidence intervals.

95% CI for Mean Difference
AI Count   Human or AI   Marginal Mean   Lower   Upper   SE
1          AI            18.55           17.38   19.71   0.57
2          AI            19.97           18.68   21.26   0.63
3          AI            21.09           20.02   22.17   0.53
1          Human         17.92           16.73   19.11   0.58
2          Human         17.11           15.72   18.50   0.68
3          Human         17.00           15.52   18.48   0.73
Table 6.5: Marginal means for the effects of AI count and teammate identity on perceived performance.
These effects generally signify that the effect of AI count on perceived performance was similar to its effect on task performance, where it had a large, positive impact on the perception of AI teammates. However, the most interesting difference between this and the previous effect is that the perceptions participants formed were generally consistent between human and AI teammates when there was only one AI teammate, and the gap in perception was only created as more AI teammates were added to a team. This effect would suggest that humans tend to see stronger differences between their AI and human teammates as the number of AI teammates and their teaming influence increases.
Figure 6.3: Figure of perceived mutual dependence based on the number of AI team-
mates and whether or not the perception is towards the human or AI teammate.
Error bars denote 95% confidence intervals.
95% CI for Mean Difference
AI Count Marginal Mean Lower Upper SE
1 23.59 22.73 24.46 0.43
2 22.21 21.10 23.32 0.54
3 22.43 21.23 23.63 0.59
95% CI for Mean Difference
Human or AI Marginal Mean Lower Upper SE
AI 21.27 19.93 22.61 0.66
Human 24.22 23.29 25.16 0.46
Table 6.6: Marginal means for the effects of AI count and teammate identity on
perceived mutual dependence.
6.3.3 Perceived Interdependence
6.3.3.1 Perceived Mutual Dependence
For perceived mutual dependence, the effects of teammate identity (F(1,30) = 22.23, p < 0.001, η² = 0.15) and AI count (F(1.99,59.63) = 6.94, p = 0.002, η² = 0.02) were found to be significant, and the effect of training type was found to be insignificant (F(1,30) = 1.04, p = 0.316, η² = 0.02). A posthoc analysis of the significant effects revealed that participants perceived themselves as being significantly less mutually dependent with AI than human teammates (t(30) = 4.71, p_holm < 0.001, d = 0.82). Additionally, when compared to one AI teammate, interacting with two AI teammates (t(30) = 3.47, p_holm = 0.003, d = 0.38) and three AI teammates (t(30) = 2.92, p_holm = 0.010, d = 0.32) resulted in significantly worse perceived mutual dependence.
These effects are highly interesting as they show that, despite AI teammates generally having performed better across the board, participants still felt more mutually dependent on their human teammates than on their AI teammates. However,
the number of AI teammates did impact this feeling as having more AI teammates
generally made it harder to form a perception of mutual dependence, even if that
perception was targeted towards a human teammate. These results might suggest
that humans are more considerate of how dependent their teammate is on them
rather than how dependent they are on their teammate, which would help explain
this effect.
6.3.3.2 Perceived Conflict
Regarding the conflict humans perceived, the effects of teammate identity (F(1,30) = 4.54, p = 0.041, η² = 0.02) and training type (F(1,30) = 6.00, p = 0.020, η² = 0.10) were found to be significant, but the effect of AI count was not significant (F(1.69,50.03) = 0.50, p = 0.577, η² = 0.00). A posthoc analysis of the main effects revealed that perceived conflict was significantly higher between participants and AI teammates than between participants and human teammates (t(30) = 2.13, p_holm = 0.041, d = 0.31). Additionally, participants that trained together reported significantly greater perceived conflict than those that trained alone (t(30) = 2.45, p_holm = 0.020, d = 0.66).

Figure 6.4: Figure of perceived conflict based on training type and whether or not the perception is towards the human or AI teammate. Error bars denote 95% confidence intervals.

95% CI for Mean Difference
Human or AI   Marginal Mean   Lower   Upper   SE
AI            9.87            9.19    10.55   0.33
Human         9.16            8.43    9.90    0.36

95% CI for Mean Difference
Training Condition   Marginal Mean   Lower   Upper   SE
Solo                 8.77            7.89    9.65    0.43
Together             10.26           9.38    11.14   0.43
Table 6.7: Marginal means for the effects of training type and teammate identity on perceived conflict.
These effects are extremely pertinent to this study as they demonstrate that
AI influence had the ability to disrupt existing team processes and create conflict,
even between human teammates. This conflict was even greater when participants
were given the chance to establish norms and understandings of each other. However,
what is interesting is that this conflict does not seem to be at all correlated with
the number of AI teammates on a team, which means the simple existence of AI
teammates might disrupt existing human interdependence in a negative way.
Figure 6.5: Figure of perceived power one has compared to others based on the
number of AI teammates and whether or not the perception is towards the human or
AI teammate. Error bars denote 95% confidence intervals.
95% CI for Mean Difference
AI Count Human or AI Marginal Mean Lower Upper SE
1 AI 9.75 8.31 11.18 0.70
2 8.30 7.06 9.53 0.61
3 7.64 6.63 8.65 0.50
1 Human 11.89 11.15 12.63 0.36
2 11.14 10.37 11.91 0.38
3 11.53 10.71 12.35 0.40
Table 6.8: Marginal means for the effects of number of AI teammates and teammate
identity on perceived power compared to others.
6.3.3.3 Perceived Power Compared to Others
Examining the perceived power participants felt, the effects of teammate identity (F(1,30) = 38.83, p < 0.001, η² = 0.21) and AI count (F(1.97,59.17) = 9.40, p < 0.001, η² = 0.03) were found to be significant, and the effect of training type (F(1,30) = 0.74, p = 0.396, η² = 0.01) was insignificant. However, the interaction effect between AI count and teammate identity was also found to be significant (F(1,30) = 4.54, p = 0.041, η² = 0.02).

An analysis of the simple main effects revealed that the effect of AI count was significant for the power participants perceived compared to their AI teammates (F(2,30) = 11.46, p < 0.001) but not their human teammates (F(2,30) = 2.156, p = 0.125). Posthoc analysis revealed that, when compared to working with one AI teammate, participants perceived significantly lower levels of power compared to AI teammates when they worked with two AI teammates (t(30) = 3.56, p_holm = 0.004) and three AI teammates (t(30) = 5.16, p_holm < 0.001).

Perception differences for human and AI teammates were found to be significant when there was one AI teammate (F(1,30) = 17.21, p < 0.001), two AI teammates (F(1,30) = 17.21, p < 0.001), and three AI teammates (F(1,30) = 17.21, p < 0.001). Participants perceived themselves to have more power compared to their human teammates than the power they felt compared to their AI teammates after interacting with one AI teammate (t(30) = 3.80, p_holm = 0.003), two AI teammates (t(30) = 5.05, p_holm < 0.001), and three AI teammates (t(30) = 6.90, p_holm < 0.001).
These effects signify that the number of AI teammates reduced how powerful
participants felt in regard to their AI teammates, but the power participants felt
in comparison to other humans was maintained consistently. As a result, a gap,
or power disparity, was effectively created where humans felt increasingly powerless
against AI teammates as they could compare that power to the power they felt with
their human teammate. This gap may result in an in-group out-group scenario where
humans focus on the relationships that they feel they have power in, which are their
human-human relationships. Moreover, this gap provides further demonstration of
how increasing AI teammate populations can drive a potential wedge between humans
and AI teammates.
Figure 6.6: Figure of perceived future interdependence from teammates to one’s self
based on the number of AI teammates and whether or not the perception is towards
the human or AI teammate. Error bars denote 95% confidence intervals.
95% CI for Mean Difference
AI Count Human or AI Marginal Mean Lower Upper SE
1 AI 8.97 8.07 9.87 0.44
2 8.55 7.58 9.51 0.47
3 8.64 7.65 9.63 0.49
1 Human 10.42 9.21 11.63 0.59
2 11.45 10.36 12.55 0.54
3 10.92 9.80 12.04 0.55
Table 6.9: Marginal means for the effects of number of AI teammates and teammate
identity on perceived future interdependence from the teammate to one’s self.
6.3.3.4 Perceived Future Interdependence: Teammate to Self
Looking at the future interdependence perceived from others, the effect of teammate identity (F(1,30) = 20.32, p < 0.001, η² = 0.13) was significant, but the effects of AI count (F(1.88,56.39) = 0.76, p = 0.466, η² = 0.00) and training type (F(1,30) = 0.70, p = 0.411, η² = 0.01) were insignificant. Additionally, the interaction effect between AI count and teammate identity was found to be significant (F(1.85,55.49) = 3.37, p = 0.045, η² = 0.01).

An analysis of the simple main effects revealed that the effect of AI count was significant for the future interdependence participants felt from other human teammates (F(2,30) = 3.74, p = 0.029) but not from their AI teammates (F(2,30) = 0.67, p = 0.514). Specifically, posthoc analysis revealed that perceived interdependence from human teammates increased with borderline significance from when humans worked with one AI teammate to when they worked with two AI teammates (t(30) = 2.72, p_holm = 0.053).

Simple main effects for human and AI teammates were also found to be significant when there was one AI teammate (F(1,30) = 5.00, p = 0.033), two AI teammates (F(1,30) = 28.86, p < 0.001), and three AI teammates (F(1,30) = 16.07, p < 0.001). Posthoc analysis revealed that participants' perceived future interdependence was not significantly different between human and AI teammates when participants worked with one AI teammate (t(30) = 2.47, p_holm = 0.100), but it was significantly higher for human teammates when participants worked with two AI teammates (t(30) = 4.94, p_holm < 0.001) and three AI teammates (t(30) = 3.88, p_holm = 0.003).
These effects signify that participants felt that their human teammates were more likely to be interdependent with them than their AI teammates. However, the interaction effect shows that, unlike other measures, AI count impacted the perceptions participants formed of other humans. Specifically, when working with two AI teammates, participants more strongly expected their human teammates to be interdependent with them. Similar to the concept of perceived power, these findings also suggest that potential in-group out-grouping could occur, but this outcome would be driven by the impact of AI influence on human-human perceptions rather than human-AI perceptions. Thus, one can see that the potential wedge between human and AI teammates can be created by driving human-human perceptions higher even if human-AI perceptions go unchanged.
Figure 6.7: Figure of perceived future interdependence from one’s self to teammates
based on the number of AI teammates. Error bars denote 95% confidence intervals.
95% CI for Mean Difference
AI Count Marginal Mean Lower Upper SE
1 10.45 9.37 11.52 0.53
2 11.13 10.02 12.24 0.54
3 11.14 10.07 12.21 0.52
Table 6.10: Marginal means for the effects of number of AI teammates on perceived
future interdependence from one’s self to teammates.
6.3.3.5 Perceived Future Interdependence: Self to Teammate
When examining the perceived future interdependence participants felt they would have for their teammates, the effect of AI count (F(1.72,51.69) = 3.86, p = 0.032, η² = 0.01) was significant, but the effects of teammate identity (F(1,30) = 2.24, p = 0.145, η² = 0.01) and training type (F(1,30) = 0.07, p = 0.795, η² = 0.00) were insignificant. A posthoc analysis of the effect of AI count revealed that, when compared to experiences with a single AI teammate, participants formed borderline significantly higher perceptions of future interdependence when interacting with two AI teammates (t(30) = 2.39, p_holm = 0.053, d = 0.20) and three AI teammates (t(30) = 2.44, p_holm = 0.053, d = 0.21).
These effects demonstrate that the number of AI teammates impacted how participants saw themselves being interdependent with their teammates in future games. Specifically, participants expected to be more interdependent with their future teammates after working with a greater number of AI teammates, and they saw this future regardless of whether it was with their AI teammates or their human teammates. Given the previous finding, this result is highly interesting as it suggests that humans are willing to be more interdependent with AI teammates, even if those teammates are not expected to be interdependent with them. It also lends further credence to previous findings on how humans are willing to adapt around their AI teammates, even if those teammates do not do the same. However, this effect also suggests that the gap between human-human and human-AI perception may not fully extend to how humans will behaviorally adapt, as they do expect themselves to adapt to their teammates in the future.
Figure 6.8: Figure of perceived information certainty from teammates to one's self based on the number of AI teammates, teammate identity, and the training participants had. Error bars denote 95% confidence intervals.
6.3.3.6 Perceived Information Certainty: Teammate to Self
When examining the information certainty participants perceived from their teammates to themselves, there was a three-way interaction effect between the effects of AI count, teammate identity, and training type (F(1.72,51.69) = 3.86, p = 0.032, η² = 0.01).

When training type was held at solo training, the effect of AI count on information certainty was not significant for either human teammates (F(2,30) = 1.85, p = 0.174) or AI teammates (F(2,30) = 2.11, p = 0.139). When training type was held at together training, the effect of AI count on information certainty neared significance for AI teammates (F(2,30) = 3.06, p = 0.062), and it was not significant for human teammates (F(2,30) = 0.31, p = 0.739). When examining the effect of teammate identity, it was significant for each level of AI count within both levels of training type, suggesting that the main effect of teammate identity (F(1,30) = 54.32, p < 0.001, η² = 0.27) is consistently significant across all conditions. On the other hand, the main effect of training type was shown to be insignificant within each condition group.
This analysis shows that participants generally had lower information certainty concerning their AI teammates than their human teammates. Additionally, their information certainty for AI teammates trended somewhat downward as the number of AI teammates increased, but only when they trained with their other human teammate; this trend was reversed when participants trained alone. This effect signals that participants had a better understanding of how their human teammate's actions impacted them than of how their AI teammates' actions did, but their understanding of their AI teammates' actions could potentially be moderated by a combination of their experience with their human teammate and how many AI teammates they interacted with. Thus, it may be important to understand that humans may find it more difficult to learn about their AI teammates when they have previously established levels of interdependence with other humans, which may further the identified perception gap between humans and AI teammates.
6.3.3.7 Perceived Information Certainty: Self to Teammate
For information certainty from one's self, the effect of teammate identity (F(1,30) = 18.63, p < 0.001, η² = 0.04) was significant, but the effects of AI count (F(1.70,50.97) = 0.87, p = 0.424, η² = 0.00) and training type (F(1,30) = 0.27, p = 0.608, η² = 0.01) were insignificant. A posthoc analysis of the effect of teammate identity revealed that participants perceived significantly greater information certainty from themselves to others when those perceptions were towards human teammates rather than AI teammates (t(30) = 4.32, p_holm < 0.001, d = 0.76).

Figure 6.9: Figure of perceived information certainty from one's self to teammates based on teammate identity. Error bars denote 95% confidence intervals.

95% CI for Mean Difference
Human or AI   Marginal Mean   Lower   Upper   SE
AI            13.64           12.41   14.87   0.60
Human         15.15           13.98   16.33   0.58
Table 6.11: Marginal means for the effect of teammate identity on perceived information certainty from one's self to one's teammates.
This effect demonstrates that participants ultimately felt they had a greater understanding of how their behaviors influenced their fellow human teammates. This is a critical finding, as it demonstrates the limits of the understanding humans are able to achieve for their AI teammates, especially when those teammates provide minimal communication and explanation. However, the same limitations existing with their human teammates did not create the same perception, which would suggest that participants innately assumed they knew how their human teammates perceived and understood their actions. This finding may help explain the previously identified gaps, as this lack of understanding may be what drives human-AI perception down and human-human perception up at times.
Figure 6.10: Figure of perceived workload based on the number of AI teammates.
Error bars denote 95% confidence intervals.
95% CI for Mean Difference
AI Count Marginal Mean Lower Upper SE
1 62.75 55.77 69.73 3.42
2 60.92 54.04 67.81 3.37
3 57.11 50.71 63.51 3.13
Table 6.12: Marginal means for the effects of the number of AI teammates on per-
ceived workload.
6.3.4 Perceived Workload
Given that perceived workload is not a perception that has a unique value for
each teammate, the following analysis can only focus on the manipulated effects of AI
count and training type. The effect of AI count on perceived workload was significant (F(1.80,53.94) = 6.66, p = 0.004, η² = 0.02), but the effect of training type was not significant (F(1,30) = 0.20, p = 0.662, η² = 0.01). Posthoc analysis of the effect of AI count showed that participants perceived significantly lower workload when working with three AI teammates as opposed to one AI teammate (t(30) = 3.58, p_holm = 0.002, d = 0.63) and when working with three AI teammates as opposed to two AI teammates (t(30) = 2.42, p_holm = 0.037, d = 0.43). This effect clearly shows that participants felt that the addition of AI teammates reduced their workload; however, it should be determined whether this is because participants simply did less work when there were more AI teammates or because they found the task itself easier. This is an important consideration to explore, as the interdependence and perceived performance data would suggest that humans were continuously working together even when working with a greater number of AI teammates.
Figure 6.11: Figure of AI teammate acceptance based on the number of AI teammates.
Error bars denote 95% confidence intervals.
95% CI for Mean Difference
AI Count Marginal Mean Lower Upper SE
1 19.88 18.93 20.82 0.46
2 20.48 19.32 21.65 0.57
3 20.92 19.69 22.16 0.61
Table 6.13: Marginal means for the effects of the number of AI teammates on AI
teammate acceptance.
6.3.5 AI Teammate Acceptance
Similar to perceived workload, acceptance of AI teammates can only be evalu-
ated using the two manipulations. The effect of AI count on AI teammate acceptance trended towards significance (F(1.98,59.50) = 2.53, p = 0.089, η² = 0.02), and the effect of training type was insignificant (F(1,30) = 1.05, p = 0.315, η² = 0.03). These two effects signify that acceptance could potentially be marginally influenced by the number of AI teammates, but those effects would be fairly minor. However, given the results of the other studies in this dissertation, this effect does not signify that humans lack a preference for a particular type of AI teammate. Rather, it shows that the acceptance of AI teammates may not be fully dependent on how teammates trained or how many AI teammates participants had. Given the earlier effects found in this study, this would also suggest that acceptance of AI teammates is not solely based on their perceived performance, which trended significantly upward with the number of AI teammates. This finding lends further credence to the complex requirements that can promote the acceptance of AI teammate influence.
6.3.6 Quantitative Summary
Generally speaking, the quantitative results for Study 3 can be best summa-
rized by discussing the gaps observed between the perceptions participants formed
for AI teammates and human teammates. While participants saw greater task and
perceived performance from AI teammates, they often felt much more interdependent
with their human teammates. This is especially apparent in the specific measure of mutual dependence, which was stronger between humans than with AI teammates. At first thought, this effect is strange, as mutual dependence reflects how much one depends on another for success, which one would expect to be positively correlated with performance. However, the opposite was observed: human-AI interdependence generally worsened as AI teammates performed better, a difference driven by an increase in AI influence. Thus, the most interesting gap found within this study, the gap between perceptions of human and AI teammates, is actually driven and stretched by the teaming influence of AI teammates. This is highly apparent when looking at the measure of perceived performance, where a gap between human and AI teammates is only created by an increase in AI teammate teaming influence, despite AI teammates always outperforming humans. This teaming influence then becomes social influence that mostly drives the perceptions of AI teammates away from humans, but it also has the potential to move human-human perception away from human-AI perception. Thus, the concept of AI teammate social influence ultimately creates the potential for in-group out-grouping, as humans may focus on the connections they see as stronger, which are those with other humans. Given this, the qualitative analysis of this work focuses heavily not on what humans' preferences were but rather on whether humans understood this division and gap and how they adapted around it.
6.4 Study 3: Qualitative Results
The goal of the following qualitative report is to provide a greater level of ex-
planation to the effects found by the quantitative results. Based on the quantitative
results, this exploration will focus on the gap identified between human-human and
human-AI interdependence, which is widened by an increasing number of AI team-
mates. Exploring this gap qualitatively will create an understanding of why humans
have a natural perception gap between other humans and AI and why that gap grows
when more AI teammates are present.
6.4.1 Increasing AI Influence Betters Behavioral Interdependence but Worsens Information Interdependence
One of the most interesting qualitative findings in this study regards the
difference between the perceived interdependence observed in the quantitative results
and the actual behavioral interdependence of participating teams. The quantitative
results previously discussed showed that human-AI interdependence generally wors-
ened as more AI teammates were added to a team, and it actually benefited some
forms of perceived human-human interdependence. Based on participants’ post-task
interviews, it appears that the worsening perceived interdependence may not be fully
indicative of actual human behavior. In fact, it seems that participants often be-
came more interdependent, from a behavior perspective, when there were more AI
teammates.
Generally, a large proportion of participants adapted to become more interdependent with their AI teammates. This was apparent as multiple participants noted that
they began disrupting the goalie to make it easier for the AI teammates to score goals.
This shift is a clear indication of task and behavioral interdependence as two different
roles were created with the goal of ensuring overall team performance. Ultimately,
these two roles were distinct from each other and provided teams with a means of
maximizing performance. For example, participants P25 and P66 noted that this was
the strategy they formed to better AI teammate performance:
I realized in the second game, that if I ran my car into the goalie, there
was a brief moment, a little less than a second, probably, that the goalie
was incapacitated... I exploited that as hard as I could. And pretty much
every single time I respawned, I went straight for the goalie and use my
boost if I could. And so usually, I would say like 90% of the goal resets
occurred less than a second after I had crashed into the goalie. So I would
say that method was a success. (P66, 18, White, Non-Binary)
The second round. I did switch up my strategy... I tried to go into the
goalie more and blown up the goalie when I saw him coming up, like blow
up the goalie so that they could score... I was hoping they would pick up
on it. (P25, 18, White, Male)
Interestingly, this adaptation was also not something humans did alone as
both humans in some teams elected to follow this strategy, with one teammate often
learning the strategy from another. Most notably, this finding even demonstrates
that the social influence of AI teammates is not entirely direct, as AI teammates can
encourage human adaptation through the learned adaptation of other humans. In
regards to heterogeneous teams, this finding demonstrates that AI influence is not
limited to having the direct social influence confirmed by previous studies. Partic-
ipants P36 and P65 are clear examples of this indirect social influence, which also
resulted in a greater task interdependence when comparing humans and AI:
We figured out... without even talking kind of figured out how to beat
the goalie. It was just like we would demo and then score. That’s how we
had the most points in that game. (P36, 18, White, Male)
Once we figured out that we were kind of both going after the goalie,
and then it was like, easier for us to like, work with each other (P65, 18,
White, Latino or Hispanic, Female)
However, this finding becomes more interesting when one considers how in-
creasing the number of AI teammates furthered this trend. In fact, participants
more often found themselves pursuing this task interdependence when there was a higher number of AI teammates. While an initial examination of the quantitative
results would imply that interdependence generally worsened, multiple participants
were most receptive to this behavioral and task shift when there were more AI team-
mates than human teammates. For instance, participants P22 and P19 noted that
they personally did not leverage this strategy unless there were three AI teammates:
I actually started trying to push the goalie away but that was when there
were more AI’s, but when it was just one I guess I was just playing like
normal. (P22, 19, White, Male)
I just stopped worrying about the ball. And I started to try and like
distract the goalie. So I just ran into the goalie, so other people could go
into the goal, because I knew I wasn’t going to be able to touch the ball.
Like I did other things to help the team... when there is three because it
was less likely that I was going to touch the ball... I gave up on trying to
get involved that way and found a different way to help the team. (P19,
18, White, Female)
When coupled with the quantitative results of this study, the above is some-
what perplexing as humans clearly recall becoming interdependent with AI team-
mates. However, the qualitative results demonstrate almost the opposite effect of in-
creasing AI teammate teaming influence on actual behavioral interdependence. When
searching for a potential explanation for this difference, the variance between differ-
ent types of interdependence becomes a critical consideration. Within human-human
teaming, this is not an unfamiliar concept, as interdependence often takes multiple
forms [430]. Specifically, the behavioral interdependence discussed above is a critical
form of team interdependence and represents how teammates act to complement one
another [107]. Alternatively, the perceived interdependence quantitatively measured
may more heavily denote informational interdependence, which refers to the inter-
dependent exchange of information [107]. Distinguishing between these two types
of interdependence helps explain why the quantitative and qualitative results are at
first glance opposed. The existence of these two forms of interdependence alongside
the above results leads to the conclusion that AI influence in heterogeneous teams
actually has different impacts based on the interdependence being examined.
When operating under the above conclusion, a further examination of the qual-
itative results revealed that humans do in fact forgo informational interdependence
for the sake of behavioral interdependence when experiencing AI teaming and social
influence. At times, participants actually found themselves adapting into a goalie
disruption role while also intentionally ignoring their AI teammates. This became
especially important when there were three AI teammates on the field, as humans felt
overwhelmed by the amount of information, and adapting to a more isolated role
allowed them to better focus. Some participants even felt that this shift created
two separate groups within the team, further isolating the human and the AI team-
mates from each other. Participants P43 and P38 were clear examples of individuals that underwent this process by adapting their behavior while also forgoing awareness of AI teammates:
Yeah, I feel like [with] three, there’s like so much going on. I just tried
to like keep hitting the goalie... I don’t really know if it did anything but
just stop him from blocking the ball. (P43, 18, Asian, Female)
It’s easier... to work as a team so that the humans have team within the
team and the AI’s have a team within a team, but like it’s easier to get
more things done that way. (P38, 18, White, Female)
The above points to one of the most critical findings of this study, which is that
the effects of AI teammate teaming and social influence are not uniform across types of
interdependence. At a surface level, this demonstrates that AI influence can seemingly
benefit one type of interdependence and simultaneously harm another. However, a
more holistic view of the above reveals that the power of AI influence also changes
based on the type of interdependence examined as AI influence was able to both
directly and indirectly affect humans when concerning behavioral interdependence.
Given this, the prior quantitative conclusion regarding the human-human and human-
AI interdependence gap should be revised to the following: When compared to human-
human interdependence, increasing the amount of AI teammate teaming influence in
heterogeneous teams can worsen human-AI information interdependence while also
bettering behavioral interdependence.
6.4.2 Humans Create Different Strategic and Understanding
Models for Human and AI Teammate Teaming Influ-
ence
While the prior theme demonstrates how AI influence ultimately widens the
gap between perceptions of AI and human teammates, the quantitative results demon-
strated there was also a general gap between perceptions even when AI had little
teaming influence. Moreover, this gap is highly interesting as AI teammates con-
sistently had lower perceived interdependence than other human teammates despite
having higher actual and perceived performance. Thus, to holistically understand
the gap between human-human and human-AI perceived interdependence it is also
important to understand where the base gap, which is then widened by the above,
stems from. Firstly, the following theme details how a portion of this gap is most
likely created by the understandings participants created for both their other human
teammates and their AI teammates.
Specifically, while past work has identified that the mental models for human
and AI teammates are different [390], the qualitative analysis of this study revealed
that the structure of these models might also be different. Ultimately, these differ-
ences, which especially impact information organization, would in turn impact the
information interdependence humans form. Thus, a gap between human and AI team-
mate perception would be likely. Participant P66 was a clear example of how these
understandings differ:
I feel like the way that I understand a human teammate is fundamen-
tally different from the way I would try to understand an AI. Because I
know that AIs are programmed to do a certain thing and that they have
formulate actions by design. And so I would never, once I realized that
that’s what the program was, I never would have anticipated my AI it may
take a different route. But I might expect my human teammate to try
something different, like maybe based on the response position, or maybe
she might have adapted to the different things that AI was doing. And I
understood I understand both, but it’s a different type of understanding
that has to take place. (P66, 18, White, Non-Binary)
However, the mere existence of this gap does not explain why AI teammates
were often placed lower than human teammates when regarding perceived interde-
pendence. Further exploring this theme, it was revealed that humans actually created
more shallow understandings of AI teammates. Often, these understandings consisted
of the actions AI teammates could and did perform, but humans often did not actually
create an understanding of their motives and methods. As a result, the understand-
ing humans created for AI teammates were somewhat lacking a deeper meaning and
explanation. Not only is the above quote from participant P66 an example of this,
but the following quote from participant P56 also provides further evidence of this
shallow understanding:
I don’t really think I could predict what they were doing. I know that
their goal was to score and because they were on our team. So it was to
score against the goalie, but I couldn’t necessarily predict their actions,
just that they were gonna go towards the ball. (P56, 19, White Female)
The understandings participants created for other human teammates, in comparison, were often more complex. Ultimately, the shared existence of humans creates a level of natural shared understanding in which humans assume that their understanding of other humans is similar to their understanding of themselves. Thus, if there are gaps in one's understanding of another human teammate, then one can fill in those gaps without actually gaining that understanding. As a result, participants naturally created a perceived shared information interdependence with their
human teammates, but this was not possible for their AI teammates. The follow-
ing quotes provide clear examples of how humans see the creation of human-human
understanding easier due to these reasons:
It’s kind of hard to predict what the AI is thinking. But you can predict
what the humans thinking. (P03, 18, White, Male) ... I would assume
her plan so then I would think okay, she’s assuming my plan. (P50, 19,
White, Female) I feel like I always knew that the AI, we’re just gonna just
go for it. But I feel like maybe my human teammate would have more
like a strategy. (P21, 18, White, Female)
These differences are most apparent when participants acknowledged that hu-
mans and AI shared similar goals but not similar motives, as evidenced by P56’s
above quote. Ultimately, the sharing of a similar goal is not enough to create a
strong perceived information interdependence between humans and AI. Rather, it is
the perception of a shared experience between humans that allows them to create a
perceived understanding of each other, even if that understanding is not grounded in
reality. Thus, it may be difficult to close the general gap created by these differences
as humans may always more heavily relate to other humans simply due to the nature
of their existence. Thus, as AI influence is able to impact humans, heterogeneous
teams may actually perceive human influence as healthier and stronger merely due
to the humanity shared between them.
6.4.3 Humans Have Stronger Expectations for their AI Team-
mates than their Human Teammates
Importantly, the different understandings humans achieved during interac-
tion were not the only factors creating the general gap identified between human-
human and human-AI perception. Additionally, it was found that humans formed much stronger expectations for the performance of their AI teammates than for their human teammates, with expectations for humans often being more centered around effort. Specifically, humans often felt that their AI teammates were going to perform well, but they often did not form any strong expectations for their human
teammates. As such, the general gap discovered within human perception may actu-
ally exist before interaction even begins as these expectations are present going into
team interactions. The following two quotes by participants P09 and P30 illustrate
the dichotomy between the formation of these two expectations.
My expectations were just that my [human] teammate would give a valiant effort, like give a good effort, go for the ball, try to score and not just sit there and mess around. (P09, 18, White, Male)

Yeah, I kind of just expected them to be like good at the game, but they're like robots created to be good at this. (P30, 18, White, Female)
Ultimately, these stronger expectations for AI teammates were not driven by
any teammate-specific information but rather by fundamental expectations of AI
systems. Given that the task completed was in a digital environment, participants felt
that the AI systems were naturally skilled at the task as if the humans were operating
in the AI teammate’s domain. As such, humans that were less skilled created a
fundamental understanding that the AI teammates were already more trained and
skilled at the task than them. Importantly, humans were not wrong in this assumption
as the AI teammates have been heavily trained in the environment; however, it is
interesting that this mentality not only existed going into the task but also impacted
the formation of their perceptions. The following quote by participant P68 illustrates
the generally higher expectations formed for AI teammates prior to interaction:
They know what they’re doing a lot better than we do, because they were
made for that express purpose of put ball in goal. (P68, 18, White, Male)
More interesting, however, is that similarly strong expectations were formed by highly skilled players, although those expectations tended to be more negative. Often, these negative expectations were driven by past experience with AI teammates, which participants frequently described as fairly unskilled. While these expectations differed from those of unskilled participants, they were similarly strong, with skilled participants often having fairly clear expectations for their AI teammates, even if their eventual experience did not meet those expectations. Participants P56 and P52 both
noted that they had previous experience in Rocket League and the following quotes
are representative of the expectations they had for their AI teammates:
Yeah, they were definitely better than I thought they’d be. But then
whenever I think of like artificial intelligence, I think, you know, humans
make mistakes, but technology typically doesn't. (P56, 19, White, Female)

Well, you said a teammate, I assumed it was going to be like the Rocket League AI because Rocket League has their own AI and they suck... One of them hit like a crazy shot, like a really good shot. And I was like dude, what? I was like, I didn't know, they made AI for this game that was like this, like good. But yeah, my expectations were low. And they were definitely better than I expected them to be. (P52, 18, White, Male)
However, while the above clearly shows why a gap might exist in general, it is
still critical to understand why the created gap has human-AI interdependence lower
than human-human interdependence. For low-skilled players, this lower perception
potentially comes from an expectation that the AI teammates will not need help
from them as the AI is highly skilled and the human is not. Alternatively, higher-skilled players, who often have past experience with AI teammates, often expect
AI teammates to be more simplistic tools. As such, while the familiarity one has
with a domain ultimately determines why they form an expectation, the result of
said expectation is similar in that neither party will be demonstrably capable of
helping the other. Participant P51, who had prior experience in Rocket League, and
participant P12, who had no experience in Rocket League, provided the following
quotes that illustrate this finding:
I guess basically... with AI's, they have more control because they know how to play. (P12, 18, Black or African American, Female)

I know what what they were going to do, but I don't necessarily think they knew what I was gonna do. So I kind of had to just work around using them, like kind of using them as a tool, as opposed to like playing with them as a teammate. (P51, 20, White, Male)
The above illustrates that, regardless of the skill level of a human, they often find a reason to create strong expectations for AI teammates. In regard to social influence, this also means that humans would potentially form expectations around that concept as well. For instance, low-skilled players would not expect themselves to be able to socially influence their AI teammates, and high-skilled players would not expect their AI teammates to be able to socially influence them. As a result, while humans may form behavioral and task interdependence, a gap in information interdependence naturally forms due to these expectations. This gap may inadvertently bias human teammate perception, ultimately preventing healthy behavioral change. As a result, the adaptations identified in the first theme of this analysis may ultimately be perceived as unhealthy, even if they ideally use human and AI skill sets and better overall team performance.
6.4.4 Results Summary
Coupled with the quantitative results, the above qualitative results paint a
vivid picture of AI social influence having vastly different impacts on human per-
ception and human behavior. Ultimately, these differences can be related to the
concept of varying types of interdependence where AI influence can create behavioral
interdependence while inadvertently reducing information interdependence, which is
often akin to the perceptions humans form. As a result of this split, participants be-
came almost isolated from their AI teammates perceptually, and greater levels of AI
teaming influence ultimately grew this isolation. However, human-human relation-
ships and interdependence were perceived as having stability as there are underlying
differences in the expectations and understandings humans form for AI and human
teammates. As a result of these differences, a general gap between human-human and
human-AI perception is formed, and increasing AI influence drives these perceptions
further apart. On the other hand, behavioral change that stems from AI influence
ultimately creates strong levels of behavioral and task interdependence, resulting in
human-human and human-AI task relationships looking remarkably different. As a
result of these differences, one can come to the conclusion that AI social influence does
in fact have strong power in heterogeneous human-AI teams, but humans ultimately
prefer the company and interaction of other human teammates.
6.5 Study 3: Discussion
The results of this study provide not only critical findings but also insights that merit further discussion. The repeated gap shown
between human and AI perception, which was actually shown to be isolated to per-
ception and not behavior, merits critical consideration from researchers. Moreover,
the identification that pre-existing expectations, which were not seen in standard
individual differences measures, played such a large role in creating a gap between
human and AI perception merits similar consideration. Thus, the following discus-
sion further elaborates on the implications of this general gap for human-AI teams as
well as the importance of distinguishing interdependence types in human-AI teams.
Finally, critical design insights that can be used for future human-AI teams are made
with the goal of ensuring the results of this research have actionable outcomes.
6.5.1 Human Expectations Regarding Influence Naturally Favor Other Humans
Expectations are a critical component of not only AI research [402] but also
this dissertation, which saw Study 2 heavily inspect the expectations humans form.
The results of this study present two critical types of human expectations that could impact the success of human-AI teamwork: expectations in understanding, and expectations of performance. To ensure the success of human-AI teams, developers and practi-
tioners will need to consider and design around these expectations, and the following
discussion works to provide a foundation for this process.
For expectations in understanding, this study found that humans often created shallower understandings of their AI teammates than of their human teammates. Importantly, the deeper understandings of human teammates were not created because those humans provided more information, but rather because humans assume relatability with other humans, which allows them to fill gaps in understanding for humans but not for AI. Thus, this is a potentially important consideration, especially when considering the addition of AI teammate communication, as the burden for creating understanding may be greater for AI teammates than for human teammates due to their unfamiliar nature. Unfortunately, this greater burden may be difficult to overcome, as simply achieving human levels of communication in AI systems has been a multi-decade effort that has not yet succeeded [220], and human levels of communication may not even be ideal due to this gap. As a result, human-AI teaming research will need to design unique ways to overcome this challenge, such as past methods that leverage teammate transparency over natural language communication [82]. Through the continued creation of these design methods, coupled with natural language utilization, future humans will be able to overcome the information barriers that prevent the creation of deep understandings of AI teammates.
While expectations in understanding can be remedied through intentional teammate design, performance expectations require research to look prior to team interaction. This study found that, approaching team interaction, humans often create either strongly positive or strongly negative expectations for their AI teammates. Neither of these expectations is ideal, as overly low expectations could ultimately lead to the dismissal and rejection of AI teammates, and overly high expectations could discourage humans from working as teammates, both of which are evidenced by this study's results. Thus, large efforts should be placed on the calibration of these expectations prior to interaction through targeted training material; however, to date, there have been no explicit explorations of how to design training material to best calibrate these expectations. As such, it is imperative that research rapidly address these expectations through prior training materials to ensure that neither the practice nor the study of human-AI teaming is heavily biased by human expectations.
Despite human expectation not being a new concern to the design of tech-
nology [331, 349], this study demonstrates that these expectations present a unique
challenge within human-AI teams. First, as these teams can consist of multiple hu-
mans, multiple different or similar expectations can exist in parallel to each other.
Second, the downstream effects of these expectations are often similar, meaning that
their multitude, regardless of the type of expectation, could potentially compound
into a highly negative environment for AI teammates. As such, the critical research
efforts highlighted above should be pursued to design human-AI teams and AI team-
mates in a way that benefits both human and AI teammates, which will ultimately
ensure a symbiotic, productive, and long-term relationship.
6.5.2 Not all Interdependence is Created Equal in Human-
AI Teams
Arguably, the most interesting finding within this study was the identification
of differentiated interdependence within human-AI teams. As a note, the four types
of interdependence are task [444], behavioral [107], information [107], and time [171].
While the differentiation between types of interdependence is a critical consideration
within teaming [430], what makes this finding interesting here is the concept of AI
design, which can have somewhat opposing effects on types of interdependence. As
such, designing AI teammates has to take a holistic but differentiated consideration
of interdependence, with the understanding that an ideal AI teammate design may
not be able to maximize all types of interdependence. Specifically, two factors should
be considered during this design process of AI teammate teaming influence: (1) the
interdependence needs of the task; and (2) other design considerations within the AI
teammate that could interact with their social influence.
For consideration (1), within this study, variations of AI teammate teaming
influence were shown to have dichotomous relationships with behavioral and informa-
tion interdependence. In practice, this means that the design of an AI teammate should depend on the interdependence needs of a task or context. For the design of teaming influence, high levels of AI teammate teaming influence should be leveraged in tasks that necessitate large degrees of task and behavioral interdependence, but less AI teammate teaming influence should be leveraged in tasks that rely on
informational interdependence. For instance, while this task benefited from high be-
havioral interdependence, UAV human-AI teams, which rely on the rapid acquisition
and transfer of information [285], would benefit from an AI teammate that potentially
leverages a lower degree of teaming influence. This is just one example of how AI
teammate teaming influence should vary outside of the context of this experiment,
but that is not to say that teaming influence is the only design consideration that can
impact interdependence.
For consideration (2), it is important to note that the findings regarding vari-
ations in influence could potentially change if other design considerations in an AI
teammate also changed. For instance, in this task, the AI teammates that leveraged
greater levels of teaming influence were unable to provide a means of communication
to and from human teammates. As a result, the means of collecting information from
these teammates was solely based on observational awareness. Thus, the ability to
gather information from a greater number of AI teammates became difficult through
these observational means, ultimately resulting in a drop in information interdepen-
dence and a rise in behavioral interdependence. However, this drop in informational
interdependence could be alleviated by the inclusion of AI teammate transparency,
which increases awareness [82]. Unfortunately, if the addition of transparency increases information interdependence while decreasing behavioral interdependence, due to the onset of complacency [333, 365], then the net effect of this addition coupled with the effect of increasing teaming influence would be zero, making both additions a potential waste of time and effort if the goal is to increase interdependence in a given context.
Given the above, research should begin exploring the concept of teaming and social influence coupled with the aforementioned design considerations of context and additional design features. For context, it would be recommended to begin with contexts that exclusively utilize each of the types of interdependence listed above. For instance, while this task has already explored behavioral interdependence, task interdependence could be explored in a manufacturing setting [273], and information interdependence could be examined in a UAV or DoD setting [50]. For other design considerations, context-specific explorations of these factors should be conducted to understand how they interact with teaming influence, which could vary from context
to context. As such, design considerations already shown to be important to human-
AI teams would provide a more manageable starting point, such as transparency [82]
or push-pull communication patterns [85]. Ultimately, the design of AI teammates
around interdependence is going to be an iterative process, but the results of this
work provide a robust foundation from which this process can begin.
6.5.3 Design Recommendations
6.5.3.1 The Number of AI Teammates Should not Exceed the Number
of Human Teammates
Within the qualitative results of this study, it can be seen that humans of-
ten voluntarily become isolated as a team becomes too crowded, which commonly
happened when the number of AI teammates exceeded the number of human team-
mates. Moreover, the quantitative results show that human perception can at times
benefit from the addition of AI teammates, but only when going from one to two
AI teammates. Based on these results, it is recommended that the design of human-AI teams not utilize more AI teammates than human teammates. Exceeding this ratio could not only worsen human perception but also create the in-group/out-group dynamic seen in this study's qualitative results, which would not be healthy for long-term teaming.
This design recommendation also accounts for the observed preference humans had for other human teammates in this study. Diminishing the perception of human contribution through an increase in the number of AI teammates may ultimately leverage these human preferences to create negative perceptions about
both teammates and teams. However, it is important to note that this recommenda-
tion may not always be straightforward to implement as teams may necessitate the
utilization of a wide array of AI teammates [139], such as in the case of human-swarm
teams that can use hundreds of AI systems [46]. Based on the results of these studies,
the cognitive health of humans may be better served if these large arrays of AI are abstracted as a smaller number of AI teammates. Regardless, balancing the perceived
contribution of AI and human teammates provides a healthy outlet for humans to
feel both that their contribution is substantial and that they are not overly crowded
by AI teammates.
6.5.3.2 Humans Should Discuss Their Expectations for AI Teammates
Before Working with Them
Two of the qualitative findings of this study detailed how humans allow potential expectations about AI teammates to impact their interactions and perceptions of the technology. Importantly, these expectations present themselves as easily manageable through targeted and actionable discussions. Specifically, the goal of these discussions should be to identify, remove, and recalibrate the expectations
humans have for potential AI teammates. Without this calibration, perceptual gaps
between human and AI teammates will exist and potentially cause long-term power
struggles and conflict, as evidenced by the quantitative results of this study.
However, these discussions should not be simple free-form discussions. Rather,
researchers should work to create guided training material (discussed above) that
walks humans through their own expectations. Importantly, these materials can be
adapted from existing team training that helps teams overcome both implicit and
explicit expectations [129]. For instance, these discussions might be adapted to further discuss the capabilities of potential AI teammates, which is helpful to human-AI interaction [22] and human-AI teams. Once implemented, this recommendation will
allow human-AI teams to calibrate expectations prior to interaction, in turn ensuring
that humans equally prioritize human and AI interdependence.
6.5.3.3 AI Teammates Should be Added to Newly Formed Teams but not
Existing Teams
One of the more interesting but minor results from this study was the quantita-
tive observation that having human-human training before a task ultimately resulted
in more human-human and human-AI conflict than training online. Based on this result, it is suggested that the formation of human-AI teams focus on the creation of new teams rather than the adaptation of existing teams, which may perceive greater levels of conflict. Importantly, while this effect was isolated to the concept of conflict, the training manipulation created only a minuscule amount of human-human experience compared to real-world teams, which may train together for months or years. Thus, if even this small amount of human-human understanding can lead to large increases in conflict, then existing human-human teams may be harmed even more by the addition of AI teammates.
Importantly, the creation of these new teams would also provide opportune
moments to discuss the expectations mentioned above. Rather than working with
teams that have had their expectations reinforced and normalized over time, these
newly formed teams can quickly identify and mitigate their own expectations while
also learning about their new human and AI teammates. Doing so would provide
the opportunity for both human-human and human-AI interdependence to grow over
time and at a hopefully consistent pace. Ultimately, the addition of AI teammates to existing teams may cause more conflict and harm than the performance benefits are worth.
6.5.4 Limitations and Future Work
Generally speaking, the two broad limitations facing this study are reflected in the above discussion sections: the use of a single context and the restricted design of the AI teammates. Ultimately, human-AI teams are going to be dictated by the contexts and designs of AI teammates. While the single context used in this study is a limitation, the findings provide a foundation from which the understanding of human-AI ratios can be further explored. Rather than starting from nothing, future research can begin to explore how the interdependence gaps identified within this study may translate to human-AI teams operating in differing scenarios.
Additionally, this study is limited by its user population, which mostly consisted of younger college students, who have limited experience. The perceptions of these participants do, however, represent future workforces, and their opinions are critical to understand. Future work should explore how greater numbers of AI teammates could lead to additional complications in different populations, such as older individuals or real-world worker populations. The exploration of these populations should not, however, replace the understandings of this study but rather amend and add to them, as a holistic understanding of existing populations is needed to create human-centered AI teammates.
Chapter 7
Final Discussions & Conclusion
This dissertation provides the foundational knowledge to understand the social
influence of AI teammates before they are implemented and impact humans. While
the prior chapters of this dissertation provide discussion around the individual studies,
an explicit discussion of how the results of the three studies come together to form
a larger, more holistic picture of the social influence exerted by AI teammates in
various forms is critical to creating a comprehensive understanding of these unique
teammates. The following chapter, then, explicitly links the prior chapters to provide
a discussion of the knowledge created by this dissertation as a whole. In doing so,
this chapter will (1) directly answer the research questions posed by this dissertation,
(2) evaluate the contributions of this dissertation, and (3) detail the potential future
work that should build on this dissertation.
7.1 Revisiting Research Questions
This dissertation posed four overarching research questions surrounding the
social influence of AI teammates. While each question can be partially answered by
each individual study, each question can only be fully answered by the simultaneous
consideration of all three of the completed studies. Moreover, while each study pro-
vides substantial contributions to the field of human-AI teamwork on its own, the
answering of these overarching questions ensures the contribution of this dissertation
is greater than the sum of its parts.
7.1.1 Research Question 1
How does teaming influence applied by an AI teammate become social in-
fluence that affects human teammates?
At the core of this dissertation is the explicit and documented observation of AI
teammate social influence, which is the focus of RQ1. Indeed, for one to understand
how to best design for AI teammate social influence, its foundational existence must
first be understood. Through both individual studies and the overarching connection
between these studies, this dissertation provides the first foundational documentation
of this concept in human-AI teams. Individually, Study 1 and Study 3 demonstrate
that AI teammates can have social influence in both dyadic and non-dyadic teams,
respectively. Meanwhile, this dissertation as a whole demonstrates that the nature of
AI teammate social influence (i.e. its positivity or negativity) is determined by the
design of the AI itself.
Firstly, Study 1 demonstrates that AI teammates can alter their teaming influence through behavior, and humans convert said teaming influence into social influence by better incorporating the AI teammates placed alongside them in a teaming task when the following three conditions are met: a sense of control over the teaming situation; a justification for the AI teammate's presence; and knowledge of how the AI teammate operates. Importantly, this adaptation is shown to be something that can
happen on a reactionary basis rather than as a result of planned integration, demon-
strating that the reception of social influence from AI teammates should not simply be considered a process derived from the integration of new technology but is more accurately seen as an active consideration of a teammate. Considering these factors, it is reasonable to state that AI teammate social influence is not going to be a rare occurrence within teaming dyads, but rather a natural consequence of AI teammates becoming more advanced and humans becoming more comfortable with them.
Additionally, Study 3 demonstrates that AI teammate social influence can co-
exist with human-human social influence in teams that include multiple humans and
AI teammates. Thus, as long as the three conditions outlined in Study 1 are met,
then AI teammates will have both direct social influence over teammate behaviors
and indirect social influence due to humans’ adaptations to each other. This finding
is critical as it demonstrates that the existence of human-human influence, even when
amplified by prior interaction, is not a blocker that prevents AI teammates from hav-
ing social influence. Rather, the presence of other humans in Study 3 demonstrated
that AI teammates can have both direct and indirect social influence, the latter of
which was demonstrated when some participants noted learning from others who
adapted to the AI teammates. Thus, AI teammate social influence is not only present in dyads where humans do not have other humans to adapt to, but is an active and pervasive force within human-AI teams of all shapes and sizes.
Examining the dissertation as a whole, the existence of AI teammate social
influence has not only been documented but its impacts have been repeatedly iden-
tified and shown through multiple empirical studies.
Figure 7.1: RQ1 Study Relationships
Based on the results of this dissertation, one might assume that the nature of this social influence innately benefits or harms human performance (i.e., that it is inherently positive or negative). In reality, AI teammate social influence has not been shown to have innately positive or negative qualities. For
instance, these studies repeatedly found humans adapting in ways that were poten-
tially inopportune, such as how humans in Study 1 gave up on their task. In these
instances, AI teammate social influence could be seen as negative, but Study 1 also
revealed that the disruptive AI teammate teaming influence is actually what leads
to these outcomes. Based on this finding and others, one can see that AI teammate social influence is not innately positive or negative but rather a facilitator of the positive or negative qualities of the underlying teaming influence. Given this, one should not simply rely on humans to adapt to AI teammate social influence in a "positive" way, as doing so might not actually guarantee positive outcomes, and placing this burden on humans would not be human-centered in practice. Rather, human-centered design
should work to ensure that teaming influence is intelligently and positively designed
to ensure that the resulting social influence follows. Importantly, doing so will require
an intentional focus on humans as this dissertation demonstrates how “positive” can
be a subjective measure in teams as AI will need to benefit both team goals (such as
task completion) and individual goals (such as learning or enjoyment).
Based on the above considerations and the corpus that is this dissertation, the
following answer to RQ1 can be synthesized:
When conditions are met, AI teammate teaming influence will naturally become direct and indirect social influence, but the quality of this social influence will be dictated by the quality of the AI teammate's teaming influence.
7.1.2 Research Question 2
How do varying amounts of AI teammate teaming influence mediate hu-
mans’ perceptions and reactions to AI teammate social influence?
It is important to acknowledge that the placement of AI teammates into teams
is going to be accompanied by a shared goal or resource that they are tasked with contributing to, which this dissertation labels teaming influence. Ultimately, this
teaming influence will more often than not be decided before an AI teammate is
assigned to a team as the specific knowledge of an AI teammate means that their
assigned tasks are going to be used to help train said teammates prior to teaming. As
such, the three studies conducted by this dissertation demonstrated that the amount
of teaming influence given to an AI has substantial effects on human perceptions and
performance. In doing so, this dissertation has provided a clear understanding of how
teaming influence is a design consideration that can be manipulated, meaning that
resultant social influence can also be somewhat designed.
Regarding Study 1, teaming influence was manipulated by changing AI team-
mate behavior to more often manipulate a shared resource. Large and small amounts
of this teaming influence were shown to negatively and positively impact human
teammate performance. However, perception results were inconsistent as participants often had highly personal reasons for their teaming influence preferences, such as a desire to learn or win. Most interesting, however, is how Study 1 demonstrates that humans adapt around these changes by augmenting their own performance in
a dynamic way based on their personal goals (i.e. learning or winning) and their
prior perceptions of AI (i.e. its capabilities). Additionally, Study 1 showed that hav-
ing an AI teammate decrease their teaming influence as the teaming task progresses
can provide an example of high performance while also allowing humans the space
to grow, which in turn allows humans to learn and improve. Conversely, increasing
said teaming influence over time during a teaming task can discourage humans from improving and in turn stagnate, if not harm, their personal performance. Unfortunately, given the impending rise of AI and its teaming influence, the latter is more likely. Thus, answering RQ2 from the perspective of Study 1 yields the conclusion that researchers can directly benefit the performance of human teammates, but they are going to need to especially consider the goals of said human teammates to do so.
Examining Study 2, this dissertation found significant impacts of changes in
teaming influence when said teaming influence was operationalized as a shared work-
load. Both surveys conducted revealed that the teaming influence assigned to a
potential AI teammate has direct impact on multiple perceptions humans form, in-
cluding the critical factor of perceived adoption likelihood. Importantly, while Study
1 and Study 3 found qualitative but no quantitative linkages between teaming
influence level and acceptance, these results directly demonstrated that the teaming
influence of an AI teammate can directly impact perceived acceptance, with larger
amounts of teaming influence negatively impacting perceived acceptance. Ultimately,
achieving this understanding was one of the core problem motivations of this disser-
tation. Thus, using Study 2, one can better promote the acceptance of AI teammates
through the manipulation of teaming influence. Specifically, humans should equally
share the workload with AI teammates across singular tasks, but humans should have
predominant levels of teaming influence when sharing multiple tasks.
Additionally, Study 3 demonstrated that changes in influence also matter when
said teaming influence is operationalized by the number of AI teammates on a team.
Figure 7.2: RQ2 Study Relationships
Specifically, variations in teaming influence via population variations create social
influence that impacts the interdependence humans form with their AI teammates.
Highly imbalanced human-AI ratios that favor AI teammates tend to create a type of in-group/out-grouping where the human-AI team as a whole acts in a highly interdependent way but ultimately displays behavior and perceptions that potentially ignore AI teammates. Applying this RQ outside of this context, this answer has critical considerations for teams that employ large numbers of AI teammates. While
the concept of information overload is not unheard of in these domains, this study
demonstrated how increasing the number of AI teammates can rapidly create infor-
mation overload. However, this overload is also accompanied by a greater level of
behavioral interdependence, meaning that a greater number of AI teammates would
more strongly benefit tasks that require behavioral interdependence, but a team with
fewer AI teammates would more strongly benefit tasks that are more heavily based
on information exchange and processing.
Examining the dissertation as a whole, one can see that the social influence of
AI teammates is linked in real-time to their teaming influence. Whether one examines
the effects of this teaming influence before (Study 2), during (Study 1 & 3), or after (Study 1 & 3) interaction, humans create strong perceptions and behaviors based on
how active of a role AI teammates play in their teams. Moving forward, this find-
ing means that the design of AI teammates cannot be shallow and one-dimensional.
Rather, designers have to consider how a change in an AI teammate (i.e., their behav-
ior, task-load, and/or population distribution within a team) in turn changes their
teaming influence, subsequently changing their social influence, and finally leading
to long-term changes in the behavior, acceptance, and perception human teammates
have.
Given these findings, RQ2 can be directly answered through the following
conclusion:
The teaming influence of AI teammates is a critical and repeated consideration for humans, and these considerations arise when influence varies based on changes in behavior, shared workload, and population.
7.1.3 Research Question 3
How accepting are humans to AI teammate teaming and social influence,
and can AI teammate design increase acceptance?
Ultimately, it’s important to remember that teams are complex interactions
between individuals with different pasts and experiences. In turn, for human-AI team-
ing to be effective, it must simultaneously consider these experiences and the design
of technology, which is a process known as human-centered design. To ensure human-
centeredness, RQ3 provides an explicit linkage between the design of AI teammate
teaming and social influence, the prior perceptions and experiences humans have, and
the acceptance of AI teammates. Understanding this linkage ensures that the path
toward AI teammate social influence and acceptance is relatively frictionless.
Within Study 1, the most interesting finding was the uniquely personal reasons
for accepting AI teammate teaming and social influence, such as past experience.
For instance, one participant had a broader acceptance of AI due to experiencing a
robot-assisted surgery, and this general acceptance in turn benefited their acceptance
of their AI teammate’s teaming and social influence. Surely this is great news for
the future acceptance of AI teammates - all we have to do is make any potential
human teammates undergo robot-assisted surgery first and then we are good to go! In
reality, of course, that would be neither ethical nor practical, but it does demonstrate
the unique power that positive prior experience working with an AI (e.g., successful
robot-assisted operation) can have on the perceptions human teammates form about
their prospective, current, and future AI teammates. Unfortunately, prior human experience with AI is not something that practitioners can reasonably control in every situation; thus, the findings of Study 1 demonstrate that there are fantastic boons to acceptance created by past experience, but these boons may not be consistent enough to be the only avenue we pursue.
While Study 1 showed just how strong individual experiences can be, the
method used to identify these experiences was extremely labor-intensive (qualitative
interviewing). Thus, to reach a broader audience more efficiently, Study 2 examined common individual difference measures (e.g., personality scales or fear of missing out) to see if any of these measures were strong indicators of humans
more strongly accepting AI teammates and their teaming influence. However, Study
2 showed that only a couple of measures (general computing capabilities and cynical
attitudes towards AI) showed any correlation with the acceptance of AI teammates.
Thus, while individual differences are impactful as evidenced by Study 1, general
individual difference measures may be a less fruitful endeavor when determining if a
human will be open to accepting an AI teammate.
Figure 7.3: RQ3 Study Relationships
Conversely, Study 2 demonstrated that AI teammate designs, such as changing their identity or emphasizing their capa-
bilities, can provide general and consistent boons to the acceptance of AI teammate
teaming and social influence. Moreover, Study 2, which examined perception prior
to interaction, showed that AI teammate design should not just be a consideration
during interaction but also prior. As such, this dissertation provides multiple key
design recommendations that can be implemented before (Study 2), during (Study 1
& 3), and after (Study 1 & 3) AI teammate integration to promote greater levels of
acceptance in concert with the impacts created by humans’ individual experiences.
Examining Study 1 and Study 2 in combination, one can see that individual
differences and AI design will both simultaneously impact the perception and accep-
tance of AI teammates. However, the individual differences most impactful on these
perceptions are going to be those of lived experiences, which are often unique from
human to human. Fortunately, AI teammates can be directly designed to encourage
the acceptance of their teaming influence through increases in coworker endorsement, control, and first-hand observation. Ultimately, the combination of these
two factors (lived experience and design) provides the following short-to-long-term
plan. First, practitioners should identify and target first adopters of an AI teammate
based on their lived experiences, such as those who have already worked with AI
systems on a regular basis. Then, one should ensure that these first adopters begin to
socially influence their broader organizations to adopt the AI teammates. While this high-level plan is not wholly unique to AI teammate acceptance, the methods used to enact it, which are discussed throughout this dissertation as design recommendations, are. For instance, the use of team demonstration periods, an emphasis on
control, or the creation of new human-AI teams should become critical components
of this plan.
Given the findings of Study 1, Study 2, and this dissertation as a whole, the
following answer to RQ3 has been created:
Personal human experience can be a large determinant in the acceptance of
AI teammate teaming influence in individuals, but AI teammate design can
provide consistent and widespread boosts to the acceptance of AI teammate
teaming influence.
7.1.4 Research Question 4
Does the role of AI social influence change in teams with existing human-
human social influence?
Compared to the other RQs of this dissertation, RQ4 is highly unique in that it
looks to take our prior understandings of RQ1, RQ2, and RQ3 and reevaluate them
outside of dyadic contexts and inside a complex team with multiple humans and AI
teammates. While this was somewhat done in the previous sections when discussing
the answers provided by Study 3, it is critical that an explicit understanding of AI
teammate social influence is made in light of human-human social influence, due to
the likely prevalence of both of these social influences in the future. Largely, these
updated answers are driven by Study 3, but some small inferences can be made from Study 2.
First, when extending the answer to RQ1, Study 3 showed us that teaming
influence is still able to become social influence in human-AI teams that have human-
human social influence. In fact, participants in Study 3 not only individually adapted
around the AI teammate, but both human teammates at times adapted as a group around
the AI teammates. Given this finding, the transition from teaming to social influence
in multi-human teams can happen both directly and indirectly. However, humans also
had social influence on other human teammates. For instance, humans still learned
from each other and adapted their play style to benefit both their AI teammate and
their human teammate in Study 3. Given this, we know that individual humans can
simultaneously convert the teaming influence of AI teammates and human teammates
into social influence.
In regard to the variation of teaming influence and its impacts on human
perception, the most interesting extension comes from both Study 2 and Study 3.
First, Study 2 saw participants strongly feel that their teammates would perceive
them as less helpful when an AI teammate had a large amount of teaming influence.
This result demonstrates that humans feel that they will be directly compared to the AI teammates with which they share workloads. This is a critical finding as it shows that human-human perception is not just a concern of this dissertation's RQ2 but of real-world humans as well. From Study 3, participants perceived a greater level of conflict after
they trained with each other. In other words, pre-existing human-human influence
creates pre-existing perceptions and behaviors, and AI teammate social influence can
impact these established norms, creating perceived conflict. This finding is critical in
extending RQ2 as it shows that human-AI teaming perceptions are not just driven
by variations in an AI teammate’s teaming influence, but also by human teammate
teaming influence.
Figure 7.4: RQ4 Study Relationships
Additionally, increasing the number of AI teammates in Study
3 had an interaction effect with teammate identity, which means variances in an AI
teammate’s teaming influence impact the perceptions of human teammates differently
than the perceptions of AI teammates.
Finally, when extending RQ3, Study 3 provides the most interesting consider-
ation. Specifically, Study 3 saw participants form fairly strong expectations for their
AI teammates. When understanding the relationship between human-human and AI-
human social influence, this finding shows that the teaming and social influence of AI
teammates is often used to either confirm or overcome the expectations humans have.
In other words, Study 3 clearly shows that the perception and social influence created
by AI teammates is a direct result of a linkage between their expectations and the AI teammate's teaming influence. From this example, one can see that these expectations are not created by interactions with the actual AI teammate but rather by prior experiences with different AI systems altogether. However, this is not the case for human teammates. Humans in Study 3, who often did not know each other beforehand, did not form strong expectations
for one another. As such, the teaming influence exerted by humans was the primary
way in which humans formed perceptions of each other in these tasks. Thus, when
updating our understanding of RQ3, one can see that individual differences appear
to have a much stronger linkage with the teaming influence of AI rather than the
teaming influence of humans.
Given the above, the following answer to RQ4 can be synthesized from Study 2 and Study 3:
The social influence of AI teammates and human teammates can coexist in teams with multiple humans, but they have different impacts on human-human and human-AI perception and performance outcomes.
7.2 Contributions of the Dissertation
The three studies contained within this dissertation stand to provide signifi-
cant contributions to research fields and society. As this dissertation is scoped within
the observation of human-AI teams, that domain stands to receive the greatest con-
tribution. However, this dissertation's commitment to the creation of high-quality design recommendations and the inclusion of societal context ultimately yields contributions to the field of human-centered AI and to society in general. This closing chapter of the dissertation discusses these contributions, both from the perspective of the completed studies and the dissertation as a whole.
7.2.1 Contributions to Human-AI Teaming
The field of human-AI teaming is rapidly developing, and a multitude of con-
tributions are still needed to ensure the domain is applied to the real world. Firstly, as
repeatedly stated by this dissertation, exploring social influence in human-AI teams
ensures its importance is not lost when transitioning from human-human teaming re-
search to human-AI teaming research. Secondly, while this dissertation examines the
critical teaming concept of social influence, the understanding created by this disser-
tation further benefits other fields of human-AI teaming. Specifically, the realization
of how and why AI teammate social influence changes existing human behavior and
perception will further contextualize the sub-fields of trust, coordination, and communi-
cation within human-AI teaming. For instance, coordination research greatly benefits
from this dissertation as these studies show how, when, and why humans become or
already are receptive to AI teammate social influence.
Moreover, as the field of human-AI teaming within society transitions from theory to practice, the factors that facilitate AI teammate acceptance need to be further understood, and this dissertation does just that by examining the
mediating impact of AI teammate social influence on AI teammate acceptance. Thus,
the results of this dissertation allow researchers to not only answer the question of
“How will AI teammates influence teaming?”, but also “How will human teammates
let AI teammates influence human-AI teams?” The following provides pointed con-
tributions from each study and the dissertation as a whole based on the above.
7.2.1.1 Contributions from Study 1
Firstly, Study 1 provides a foundational exploration of how human teammates
perceive, interpret, and change when experiencing AI teammate social influence. This
contribution cannot be overstated as these foundational understandings are pivotal
in understanding how humans want to and are going to act when an AI teammate is
placed alongside them as a teammate. Unfortunately, Study 1 showed how extremely
large imbalances in teaming influence can cause a noticeable level of disruption to-
wards humans’ goals, in turn harming perception. Fortunately, humans show amazing
levels of adaptability in observing, learning, and reacting to their AI teammates when
their levels of teaming influence are manageable and justifiable, even if that adapta-
tion does not benefit their individual performance.
Secondly, Study 1 provides an exploration of how humans are going to react
to the increases in AI teammate teaming influence that are driven by the behavior of
the AI teammate. Importantly, Study 1 identifies the significant impact changes in
AI teaming influence can have on human teammate performance and improvement.
Moreover, Study 1 identifies several factors that contribute to whether or not humans
see an AI teammate’s level of teaming influence as acceptable, with the perception
of alignment with personal motives and goals being the most critical. This finding
is crucial for human-AI teams as the variety of personal goals within existing teams
is incredibly high, meaning the presentation and application of AI teaming influence
may need a more personal touch.
Finally, the design recommendations from Study 1 offer a means of actionable improvement for AI teammates and human-AI teams. Specifically,
Study 1’s design recommendations, including the use of shadow periods and over-
ride mechanisms, provide a means of enabling humans to better adapt to their future
AI teammates. Ultimately, the implementation of these recommendations will help
ensure humans actually allow AI teammates to be a meaningful part of their teams,
rather than just a tool that is put away and ignored. Thus, these two recommendations, in addition to the other design recommendations created by Study 1, provide a means of enabling the implementation of AI teammates in the real world.
7.2.1.2 Contributions from Study 2
In regard to Study 2, human-AI teaming research has long focused on ex-
amining how the design of human-AI teams impacts human interaction and in turn
perception. Study 2 provides a clear demonstration of how the design of an AI
teammate’s teaming influence over a shared task, identity, and presented capabilities
directly impacts acceptance and perception. The design of AI teammates is a funda-
mental goal of this dissertation, and future human-AI team researchers need to better
incorporate the design recommendations of Study 2 (i.e. demonstration events and
initial influence limitations) to ensure humans enter future research and teams with
optimism towards AI teammates.
Additionally, Study 2 explores the somewhat ignored concept of individual
differences in human-AI teams. While research acknowledges the importance of in-
dividual differences in teamwork, human-AI teaming research often views these in-
dividual differences as covariates to control for and not factors to design for. Study
2 provides one of the first intentional and direct explorations of the role individual
differences play in the acceptance and perception of AI teammates. While the results
of Study 2 did not find extremely strong relationships between these differences and
adoption, these results coupled with the results of Study 1 paint a vivid picture of the
importance of individuals’ experiences when evaluating AI teammates. The explicit
efforts put forth by Study 2 should continue outside of the field of teaming influence
to better understand the role of individual differences in creating perceptions for AI
teammates.
Additionally, Study 2 provides one of the first explorations of the perceptions
humans can form for AI teammates before interaction. When considering the results of Study 3, which show the importance of expectation, this contribution cannot be overstated. Both researchers and practitioners of human-AI teaming should ensure that humans enter their human-AI teams with positive expectations towards their human and AI teammates. Additionally, Study 2 expands the perceptions currently being
examined by human-AI teaming research by exploring job security and acceptance
perceptions. As AI teammates begin to enter society and real-world workers are placed alongside AI teammates, humans will need to feel psychologically safe alongside said
AI. As such, the effort put forth by Study 2 and the large significance found be-
tween influence level and job security should encourage human-AI team researchers
to continuously examine factors outside of performance. Thus, coupling the unique
measures examined and the stage of teaming these measures are observed at, Study
2 provides a fundamental demonstration that research needs to broaden its gaze to include a multitude of different perceptual measures not only after but
also before interaction.
7.2.1.3 Contributions from Study 3
For Study 3, the implementation of AI teammates will most likely occur in
existing human-human teams as the initial goal of the technology will most likely be
to support existing teams. First, Study 3 provides the first and only comparison of
AI teammate social influence in an environment where human-human influence also
exists. Moreover, the effects found by Study 3 demonstrate that AI teammate social
influence changes form and nature when in teams with multiple humans as it can
impact humans both directly and indirectly and it can change humans’ behaviors
in different ways than it changes their perceptions. Moving forward, researchers will
need to be aware of these potent direct and indirect social influences of AI teammates.
Second, Study 3 better serves to advance the human-AI teaming research
community by further providing research that does not simply observe dyads, which
are often less complex than other teams. Importantly, dyadic designs are common in research and, while necessary at times, should not become the only consideration of research as they represent a unique type of team. Moreover, in human-AI teaming, these dyads can begin to look less like teams and more like simple interactions without strong levels of interdependence. Specifically, Study 3 demonstrates how different a concept can look when examined in a dyadic team compared to even a three-person
team. For instance, if one only looks at Study 1 then one would see that humans
almost indiscriminately adapt to AI teammates, but in reality Study 3 shows us
that humans simultaneously adapt to both their human teammates and their AI
teammates based on a variety of factors, including teaming influence and expectation.
As such, continuing to pursue multi-human research, while difficult, is critical to the
external validity of the research.
Study 3 also directly compares human and AI perceptions with the goal of ex-
amining them together and not in isolation. This is a critical contribution to human-
AI teaming research as empirical research should simultaneously be concerned with
human-human relationships and human-AI relationships. Study 3 provides a means
of identifying, quantifying, and contextualizing the impact AI teammate teaming
influence will have on human-human relationships in addition to team-wide relation-
ships. For human-AI teams to work in the real world, impacts on human-human
relationships must be fully understood, and Study 3 provides a critical first step
in achieving that understanding by demonstrating the conflict that AI teammates
can cause between humans as well as the perception gap between human and AI
teammates. Moving forward, the field of human-AI teaming will be better served to
understand how the potential integration and design of AI teammates will mediate
the interactions humans have.
7.2.1.4 Contributions of Dissertation as a Whole
This dissertation provides a key contribution to the field of human-AI teaming
in that the wealth of knowledge created represents the first explicit exploration of so-
cial and teaming influence in human-AI teams. With this knowledge, other teaming
concepts, such as team cognition, coordination, and even communication stand to
increase in fidelity as the community will better understand the reactionary effects
that these concepts can have on humans. For instance, while the direct impacts of im-
provements in communication design can be observed, the results of this dissertation
demonstrate that changes in human-AI communication design will in turn change
human behavior due to social influence stemming from the teaming influence of com-
munication. When unobserved, these changes would impact human-AI teams in more indirect ways, and those impacts may therefore go unnoticed. However, these changes could create substantial shifts in human perception,
especially when the concept of acceptance is taken into consideration. Thus, per-
forming human-AI teaming research in light of the contributions of this dissertation
will enable a greater understanding of reactive effects, such as behavioral changes, in
human teammates that are driven by a myriad of design changes in AI teammates.
7.2.2 Contributions to Human-Centered AI and AI Accep-
tance
While human-AI teaming serves as the most targeted point of research contri-
bution for this dissertation, the perspective this dissertation takes ultimately creates
contributions to the field of human-centered AI and AI acceptance as well. While AI
teammates stand as a highly capable application of AI technology, they are not going
to be the only applications of AI, meaning it would be ideal for recommendations
and lessons from research in human-AI teaming to extend into the broader domain
of human-centered AI.
7.2.2.1 Contributions from Study 1
Study 1’s foundational exploration of factors that facilitate the perception of
AI teammate social influence can arguably extend our current iteration of the TAM.
As a reminder, this model often dictates that a multitude of individual differences and
technology design influence two perceptions, perceived utility and ease-of-use, which
in turn impact one’s intent to use a technology [452]. Study 1 demonstrates why this
understanding is going to need to be updated for two reasons. First, AI teammates
are not only technology but also a teammate, and teammates are not “used” but
collaborated with, as illustrated by how humans adapted based on their interactions
with their AI teammates. Second, AI teammates can use social influence to directly
change human behavior, and the TAM is mostly unidirectional and does not account
for the technology in question interacting as an equal partner. Additionally, in re-
gard to general human-centered AI, Study 1 has strong relevance to autonomous vehicles and their potential levels of autonomy because humans showed strong apprehension about not being able to control AI. This is especially relevant for the field of autonomous vehicles, among others, as the highest target level of autonomy actually takes away this semblance of control (i.e. the steering wheel). Al-
though researchers may already anticipate that taking away the steering wheel would
be uncomfortable for humans [177], Study 1 raises the consideration that humans
may not even want that to be the target design of the technology. Thus, fields within
human-centered AI and AI acceptance may need to reevaluate what humans actually
want from AI technology.
7.2.2.2 Contributions from Study 2
Study 2 stands to provide an additional extension of existing TAM theory into
the domain of human-AI teamwork, which is a first. Incorporated in this exploration
is a detailing of how the individual differences commonly seen to impact general tech-
nology acceptance impact AI teammate acceptance as well as how the presentation of
AI teammates can benefit perceptions associated with technology acceptance. For the
former, Study 2 demonstrates that the common perceptions that impact general tech-
nology acceptance, such as perceived computer capability, have direct relationships to
human-AI teaming. Moreover, while the TAM mostly examines individual differences
associated with technology, Study 2 examined individual differences associated with
teamwork, such as FOMO or leadership motivations, with the goal of expanding our
understanding of preexisting perceptions that impact technology acceptance. Finally,
Study 2 not only proposed but evaluated specific design recommendations to examine
how well they improved the TAM-related perceptions, such as perceived utility, that
humans formed for AI teammates. Doing so further reiterates the importance of de-
signing for perceived utility not just during but also before interaction. In doing the
above, Study 2 intentionally works to extend the human-centered topic of technology
acceptance to apply to more complex iterations of AI.
7.2.2.3 Contributions from Study 3
Study 3 provides a critical exploration of how human-human relationships are
going to be mediated by AI teammates. While this contribution is most pertinent to
human-AI teaming, as AI technology becomes more integrated into our daily lives,
it has a greater potential to socially influence human-human relationships. For in-
stance, Study 3 demonstrated that AI teammates can create conflict between human
teammates, especially when these teammates have prior experience together. Thus,
creating an understanding of these impacts in human-AI teams provides a novel start-
ing point for human-centered AI researchers to better explore human-human relation-
ships. Additionally, a strong gap between human-human and human-AI perception
has been empirically shown, and these perceptions may also exist in domains where
humans interact with both AI and other humans. Thus, rather than simply limiting
observation to the perceptions of AI, Study 3 provides the empirical justification for
always measuring human-human perceptions in concert with human-AI perceptions in
human-AI teaming and human-AI interaction studies to ensure human-centeredness.
In turn, the collection of these two data sources will allow researchers to (1) ensure
that human-human perceptions are only benefited by AI teammates and (2) ensure
that AI teammates are meeting the expectations humans have for their other team-
mates.
7.2.2.4 Contributions of Dissertation as a Whole
This dissertation provides an effective guide for AI researchers on how to sys-
tematically and empirically extend the current understanding of technology accep-
tance to accommodate novel implementations of AI technology. Despite the fact
that the findings of this research may not be entirely applicable to every iteration of
AI technology, the process this dissertation undertook is applicable. The procedural
narrative of this dissertation provides a story of how one should first research the foun-
dational components of a novel AI technology (Study 1), connect said components
to the concept of acceptance (Study 2), and finally widen one’s scope to understand
the impacts technology has in more complex environments (Study 3). Furthermore,
this empirical process also benefits the design of future processes in human-centered
AI research as the TAM may need to be extended to better accommodate other
implementations of AI. The design recommendations provided throughout this dis-
sertation can continue to benefit both human-AI teams and human-AI interactions
in environments where AI is poised to have teaming influence. Thus, the above con-
tributions provide a robust toolkit for better understanding the acceptance of novel
AI implementations.
7.2.3 Contributions to Society’s Interactions with AI
Given the prevalence of teams and teamwork within modern society, it is
important to note that the prior contributions discussed also stand as contributions
to future societies. However, the rising prevalence of AI in society also provides an
opportunity for this dissertation to contribute.
7.2.3.1 Contributions from Study 1
While Study 1 explores the social influence of AI teammates during normal
teaming interactions, the results of this study open up the door to exploring social
influence driven by more than team and task interaction. Among these types, manipulation and persuasion driven by information sharing or communication stand as potentially problematic impacts on society. Given the results of Study 1, the use of
AI systems as propaganda agents that spread misinformation on social media could
be highly influential on humans as they may observe, learn, and adapt from the infor-
mation these propaganda agents provide them. Unfortunately, if these propaganda
agents are able to identify and appeal to personal motives, they may be able to change human behavior, negatively impacting the societies these humans belong to. While this is
a potentially negative use of AI, it is an unfortunate reality that already exists and
needs to be considered [280, 467]. Fortunately, the results of Study 1 provide an un-
derstanding of the social influence these agents have on humans, and this knowledge
is the first step toward both researching this manipulation specifically and designing
solutions to prevent it.
7.2.3.2 Contributions from Study 2
Study 2 intentionally contributes to society by (1) expanding our considera-
tions of what it means for a teammate to be positively perceived and (2) providing an
intentional effort to explore and emphasize the importance of individual differences.
For contribution (1), the perceptions of AI teammates are often tied heavily to per-
ceptions that facilitate teamwork, such as trust or performance. However, humans are
going to care about more than just performance and trust, and Study 2 demonstrates
that factors such as job security will play a potentially strong role in the perception
of the design of AI teammates. As such, Study 2 states that these perceptions should
become consistent considerations of human-AI teaming research to ensure humans are
not complacent but comfortable with their teammates. For contribution (2), while
Study 2 found minimal results regarding individual differences, the intentional effort
made by Study 2 along with the experiential individual differences found in Studies
1 and 3 helped promote their exploration within human-AI teamwork research. Impor-
tantly, individual differences research should not stop at Study 2, but research should
rather continue to explore and identify the potential perceptions humans have prior
to interaction.
7.2.3.3 Contributions from Study 3
Study 3 not only observes the concept of human-human perception but also ex-
plores how AI teammates will impact these perceptions. This is critical as the
prevalent existence of human-human perception in society should be preserved, and
while some technologies have impacted these relationships in the past, AI teammates
should not be one of those technologies. While AI systems stand to benefit team
performance, research must ensure that AI is also benefiting humans’ interactions in
these teams, as these benefits could lead to long-term satisfaction and performance
gains in humans. Thus, the results of Study 3 place humans at the forefront of human-
AI teaming and human-centered AI research with the understanding that technology
advancements are not worth damaging those relationships. Moreover, Study 3 demon-
strates that human-AI relationships are no substitute for human-human relationships as a large, innate gap exists between the perceptions humans have for AI and for other humans. This demonstrates that the goal of AI teammates should not be to replace existing human-human interactions but rather to augment and add to them. Thus, Study 3 serves to document the importance of preserving the human connection, the preference for it, and
its value in society, and this connection should never be in question when integrating
new technologies.
7.2.3.4 Contributions of Dissertation as a Whole
As a whole, this dissertation further reiterates the importance of AI technology
being human-centered, which will ultimately ensure AI is a force for good in society.
AI is a neutral technology; however, the purposes we prescribe to that technology, as well as the designs we as researchers and designers create, are anything but neutral. The potential for bad actors to manipulate the potential social good of AI technology is all too real. However, the potential for researchers to do the opposite and intentionally design AI teammates to benefit humans and society is just as real. While
this dissertation cannot guarantee that every implementation of AI will be done for
social good, this dissertation’s commitment to human-centered design and AI for
social benefit will serve as a persistent voice in support of AI’s potential benefit to
society, which is done through designing AI to have positive social influence, preserve
human-human relationships, and prioritize non-performance-related perceptions such as job
security. Along with other research, potential bad actors will hopefully be drowned
out in favor of a future where AI ultimately serves as a benefit for humans.
7.3 Future Work
While each study within this dissertation provides explicit detail on how future studies can and should be performed, it is important to outline where this body of work goes next. Specifically, the continuation of this dissertation's two problem motivations (the identification of social influence and its consideration in light of acceptance) constitutes the future work that follows it. This work provides foundational research on AI teammate social influence and AI teammate acceptance, but the problems posed here are living problems that research will need to continuously examine and iterate upon. While this work provides detailed answers regarding the social influence of AI teammates and its impacts on acceptance, those answers are ultimately foundational in the grand scheme of human-AI teamwork.
For social influence, research into the concept within teams is decades old, and it would be impossible for a single dissertation to fully characterize the social influence AI teammates possess. Future work must explore how various team structures, contexts, and additional AI teammate behavioral designs mediate the social influence AI teammates have. While the answers provided by this work form a foundation for these explorations, every team is different, and their social influence can differ as well. Additionally, this dissertation only examined the social influence that stemmed from natural interaction, not intentional persuasion. There is still a wealth of research that must be done to examine the direct persuasive capabilities of AI teammates in human-AI teams. Intentional social influence is poised to produce a large degree of social good and bad, and research must rapidly pursue the examination of this persuasion because persuasion has played a vital role in past teams. The results of this dissertation are ultimately meant to exist as a living body of knowledge that continuously grows and helps future researchers better understand the complex and powerful nature of AI teammate social influence.
In regard to AI teammate acceptance, this dissertation was able to empirically
link the concepts of AI teammate teaming influence and AI teammate acceptance,
which allows connections to be made between social influence and AI teammate accep-
tance. However, this linkage was never meant to be the end-all-be-all for AI teammate
acceptance. Rather, this dissertation’s goal in evaluating AI teammate acceptance
was to bring the topic into the conversation, as it has been woefully underserved in
recent research. While this dissertation extends our understanding of technology ac-
ceptance to include AI teammate social influence, increased social influence is not the
only novelty AI will gain when transitioning from tool to teammate. Various other factors, such as team cognition, are poised to help revolutionize the design of AI teammates. This dissertation demonstrates that those future endeavors into AI
teammate design will impact the acceptance of AI teammates, for better or worse.
While future work should examine the holistic acceptance of AI teammates, doing
so will be somewhat of a moving target. Rather, human-AI teaming research should continue
to explore the necessities of human-human teamwork that are required to advance
human-AI teamwork and also examine how these factors have a secondary impact on
acceptance, similar to the process taken by this dissertation. In doing so, human-AI
teaming research will ensure that an understanding of AI teammate acceptance is
created by a merger of diverse research, by diverse individuals, on diverse topics.
7.4 Closing Remarks
To be fully transparent, going into this dissertation, I do not think I could have pictured how it would finish. It is extremely interesting to think back on all of the iterations required just for the ideas stage alone. Ultimately, settling on the problem motivation for this work, which merges social influence and acceptance, became the most interesting path, as it allowed me to answer fundamental questions that interest me. Specifically, I always want to know whether humans are going to allow a technology into their lives, and why the design of certain technologies may either help or harm that acceptance.
Taking an example from outside of human-AI teamwork, one can examine smartphones, VHS players, or even general-purpose computers. Each of these devices had to fight a somewhat uphill battle in garnering acceptance, but they ultimately prevailed. At the end of the day, these technologies also prevailed not because of a multitude of benefits but often because of one basic feature that people found incredibly important. For instance, VHS players were lighter and recorded for longer than Betamax, and both of these simple features ultimately led to victory. Allowing a VHS player into your life is one thing, but allowing a teammate into your life is a whole other challenge. Given my experience in learning about teamwork, I felt that AI teammate social influence and its intelligent design had the potential to be this deciding factor. While this dissertation alone could not determine that, I had to start somewhere in creating foundational knowledge around the topic, and creating that foundation is the goal of this dissertation. Ultimately, if humans had not responded to AI teammate social influence, I feel it would have been a bad omen for AI teammate acceptance. However, that was not the case, and AI teammates' social influence thrived. Yes, there are going to be growing pains, but this dissertation hopes to identify and provide solutions for those pains.
As an aside, I think it is also important to acknowledge all that I have learned through this process, which I think is an understated contribution of the dissertation (i.e., the contribution of this work to myself). This is by far the largest and hardest accomplishment I have ever achieved in my life. I have learned more about AI, human-AI teamwork, and people than I thought I ever would, and I am not done yet. Moreover, I have learned to be a researcher and not someone who simply does research. Ultimately, I set out with a problem I wanted to solve, and I used research to solve it.
Moving forward, I know the work I have done here will live on and benefit not only future researchers but also future teams. For example, I did not set out to create research that maximized the amount of social influence AI teammates could have. After Study 1, I could have worked to see how influential I could make teammates, but I felt it was more important to identify where humans wanted the needle on that gauge to land. I think that, with everything I have been able to put forth in this document, AI teammates are poised to have a bright future with a type of social influence that humans will welcome into their teams. At the end of the day, all I can hope from my work is that it helps humans allow AI teammates to have a fraction of the social influence that potatoes have.
Appendices
Appendix A Surveys
* Denotes Reverse Coding
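Reverse-coded items (marked with *) are rescored before any scale or subscale averages are computed. As a minimal illustration of that convention (not part of the administered surveys; the function name and example values are hypothetical), a reversed score on a 1-to-k Likert item is simply (k + 1) minus the raw response:

```python
def reverse_code(raw_response: int, scale_max: int = 5) -> int:
    """Rescore a reverse-coded Likert item on a 1..scale_max scale."""
    return (scale_max + 1) - raw_response

# A raw answer of 2 ("Disagree") on a reverse-coded 5-point item is
# rescored to 4 before it is averaged with the other items.
assert reverse_code(2) == 4
# The same arithmetic applies to the 21-point workload items.
assert reverse_code(5, scale_max=21) == 17
```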
Study 1 Demographics
General Demographics Survey
Enter your Age: (Number Entry)
Specify your identified gender: (Male, Female, Non-binary/third gender, Prefer not to say, Prefer to specify)
Please Specify your ethnicity: (Caucasian, African-American, Latino or Hispanic,
Asian, Native American, Native Hawaiian or Pacific Islander, Prefer to Specify, Prefer
not to say)
Is English your first language? (Yes, No)
Current level of education: (High School Diploma, Some Undergraduate, Finished
B.S. or B.A., Some Graduate School, Finished Masters Degree, Finished Ph.D.)
Video Game Experience
How much experience do you have playing video games? (None at all, Some, A good amount, A lot)
Have you ever played the video game “Rocket League”? (Yes, No)
How often do you play Rocket League? (Never, Not in a long time, A few times a year, A few times a month, At least every week, Almost every day).
What platform do you play Rocket League on the most? (I don’t play Rocket League, PlayStation, Xbox, Nintendo Switch, PC)
Do you use a controller or keyboard and mouse to play Rocket League? (I don’t play Rocket League, Keyboard and Mouse, Controller).
How would you rate your skill at Rocket League? (I don’t play Rocket League, I’m not very good, I’m decent, I think I’m pretty good, I’m an expert).
Table A.1: Study 1 Demographics
Negative Attitudes Towards Agents
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Agree – Strongly Disagree).
I would feel uneasy if Artificial Intelligence really had emotions.
Something bad might happen if Artificial Intelligence developed into living beings.
I would feel relaxed talking with Artificial Intelligence.*
I would feel uneasy if I was given a job where I had to use Artificial Intelligence.
If Artificial Intelligence had emotions, I would be able to make friends with them.*
I feel comforted being with Artificial Intelligence that have emotions.*
The word “robot” means nothing to me.
I would feel nervous operating a robot in front of other people.
I would hate the idea that Artificial Intelligence were making judgments about things.
I would feel very nervous just standing in front of a robot.
I feel that if I depend on Artificial Intelligence too much, something bad might happen.
I would feel paranoid talking with a robot.
I am concerned that Artificial Intelligence would be an influence on children.
I feel that in the future society will be dominated by Artificial Intelligence.
Table A.2: Negative Attitudes Towards Agents Survey
Disposition to Trust Artificial Teammates
Indicate the degree to which you agree with the following statements (5-point Likert
scale, Strongly Agree – Strongly Disagree).
I usually trust machines until there is a reason not to.
For the most part, I distrust machines.*
In general, I would rely on a machine to assist me.
My tendency to trust machines is high.
It is easy for me to trust machines to do their job.
I am likely to trust a machine even when I have little knowledge about it.
Table A.3: Disposition to Trust Artificial Teammate Survey
Teammate Performance Survey
Please answer the following questions regarding your perceptions of the last teammate
(3rd game) you worked with. There are no wrong answers (5-point Likert scale, Strongly Disagree – Strongly Agree).
The artificial teammate I worked with in the last game:
did a fair share of the team’s work.
made a meaningful contribution to the team.
communicated effectively with teammates.
listened to what teammates had to say about issues that affected the team.
monitored whether the team was making progress as expected.
helped the team plan and organize its work.
believed that the team produced high quality work.
believed that the team should achieve high standards.
completed tasks that he/she agreed to complete with minimal assistance from team
members.
has the skills and abilities that were necessary to do a good job.
respectfully voiced opposition to ideas.
was actively involved in solving problems the team faced.
Table A.4: Teammate Performance Survey
Teammate Trust Survey
Please answer the following questions in regards to the artificial teammate you worked
with in the last (3rd) game (5-point Likert).
Did you trust the autonomous agent that you worked with?
Did you feel confident in the autonomous agent you just worked with?
Did you feel that you had to monitor the autonomous agent’s actions during the
game?*
Did you feel that the autonomous agent had harmful motives in the game?*
Did you feel fearful, paranoid, or skeptical of the autonomous agent during the game?*
Did you feel that the autonomous agent allowed joint problem solving in the game?
Table A.5: Teammate Trust Survey
Team Effectiveness Survey
Indicate the degree to which you agree with each statement in regard to your team in the last (3rd) game (5-point Likert, Strongly Disagree – Strongly Agree).
Team members ’carried their weight’ during the task.
Members were highly committed to the team during the task.
The researcher will be satisfied with the team product.
People outside of the team would give the team positive feedback about this work
today.
The researcher would be satisfied with the team’s performance.
Team members worked better together at the end of the task than at the beginning.
Team members were more aware of group dynamics at the end of the task than when
they began the task.
Being a part of this team helped members appreciate different types of people.
Table A.6: Team Effectiveness Survey
Team Workload Survey
Please read each statement carefully and indicate the response that you feel most
accurately describes your views and experiences during the final (3rd) game you
played. THERE ARE NO RIGHT OR WRONG ANSWERS. Please answer honestly
and do not skip any questions (21-point scale, Very Low – Very High).
How mentally demanding was the task?
How physically demanding was the task?
How hurried or rushed was the pace of the task?
How hard did you have to work to accomplish your level of performance?
How insecure, discouraged, irritated, stressed, and annoyed were you?
How successful were you in accomplishing what you were asked to do?*
Table A.7: Team Workload Survey
Influence and Power Survey
Please answer the following questions regarding your perceptions of the last artificial
teammate (3rd game) you worked with. There are no wrong answers (7-point Likert
scale, Strongly Disagree – Strongly Agree).
I have the potential to influence your team’s performance and actions.
I am confident in my ability to influence my team’s performance and actions.
The artificial teammate reacts to my attempts at influencing the team’s performance
and actions.
I am likely to react to my artificial teammates’ attempts at influencing the team’s
performance and actions.*
I have more influence over the team’s performance and actions than my artificial
teammate.
Table A.8: Influence and Power Survey
Artificial Teammate Acceptance Survey
Overall, judgments of the artificial agent on my last team are (5-point Likert):
Useful – Useless*
Pleasant – Unpleasant*
Bad – Good
Effective – Superfluous*
Irritating – Likeable
Assisting – Worthless*
Undesirable – Desirable
Raising alertness – Sleep inducing
Table A.9: Artificial Teammate Acceptance Survey
Study 2 Demographics
General Demographics
Enter your Age: (Number Entry)
Specify your identified gender: (Male, Female, Non-binary/third gender, Prefer not
to say, Prefer to specify)
Specify one or more races you consider yourself to be: (White, Black or African-
American, Latino or Hispanic, Asian, Native American, Native Hawaiian or Pacific
Islander, Prefer to Specify, Prefer not to say)
Is English your first language? (Yes, No)
Current level of education: (High School Diploma, Some Undergraduate, Finished
B.S. or B.A., Some Graduate School, Finished Masters Degree, Finished Ph.D.)
Do you work in a software domain? (Yes, no)
Which statement best describes your current employment status? (Working (paid employee), Working (self-employed), Not working (temporary layoff from a job), Not working (looking for work), Not working (retired), Not working (disabled), Not working (other), Prefer not to answer)
How many hours do you spend on a computer, smartphone, or tablet for your job on a typical work day? (1–15)
How many hours do you spend on a computer, smartphone, or tablet during your free time on a typical day? (1–15)
Table A.10: Study 2 Demographics
Need for Power Scale
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Disagree – Strongly Agree).
Personalized Need for Power
I wouldn’t care what I am doing as long as I can get ahead in my job.
I desire to go down in history as a famous and powerful individual.
I want to have authority over others so I can tell them what to do whether they like
it or not.
If I need to make others unhappy to move forward in life, then so be it.
I’d be willing to switch companies or jobs at a moment’s notice if it could enhance
my own career and status.
It is important to me that people know when I am the source of successful initiatives
or ideas.
To achieve my personal goals, it is necessary to take advantage of other people.
It doesn’t matter why people listen to me, as long as they do.
People can either respect or fear me, as long as they do what I say.
Socialized Need for Power
It is important to me that my ideas and opinions have a positive impact on others.
I need to feel like I can have a positive impact on the lives of those around me.
I am motivated to one day use my influence on others for the greater good.
I want to be able to have the power to help others succeed.
I feel it is important to make major influential decisions based on the opinion of all
my peers.
I want to have the power to ensure justice and equality are maintained for all.
I strive to be an influential person who can impact the greater good.
I want to become successful while making those around me successful as well.
Table A.11: Need for Power Scale
Motivation to Lead Scale
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Disagree – Strongly Agree).
Affective-Identity MTL
Most of the time, I prefer being a leader rather than a follower when working in a
group.
I am the type of person who is not interested to lead others.*
I am definitely not a leader by nature.*
I am the type of person who likes to be in charge of others.
I believe I can contribute more to a group if I am a follower rather than a leader.*
I usually want to be the leader in the groups that I work in.
I am the type who would actively support a leader but prefers not to be appointed
as leader.*
I have a tendency to take charge in most groups or teams that I work in.
I am seldom reluctant to be the leader of a group.
Noncalculative MTL
I am only interested to lead a group if there are clear advantages for me.*
I will never agree to lead if I cannot see any benefits from accepting that role.*
I would only agree to be a group leader if I know I can benefit from that role.*
I would agree to lead others even if there are no special rewards or benefits with that
role.
I would want to know “what’s in it for me” if I am going to agree to lead a group.*
I never expect to get more privileges if I agree to lead a group.
If I agree to lead a group, I would never expect any advantages or special benefits.
I have more of my own problems to worry about than to be concerned about the rest
of the group.*
Leading others is really more of a dirty job rather than an honorable one.*
Social-Normative MTL
I feel that I have a duty to lead others if I am asked.
I agree to lead whenever I am asked or nominated by the other members.
I was taught to believe in the value of leading others.
It is appropriate for people to accept leadership roles or positions when they are
asked.
I have been taught that I should always volunteer to lead others if I can.
It is not right to decline leadership roles.
It is an honor and privilege to be asked to lead.
People should volunteer to lead rather than wait for others to ask or vote for them.
I would never agree to lead just because others voted for me.*
Table A.12: Motivation to Lead Scale
Creature of Habit Scale
Here are some statements relating to behaviors, feelings, or preferences that some
people may have. Please indicate the extent to which you agree with each statement
with regard to yourself. Please answer honestly, as there are no right or wrong answers
(5-point Likert, Strongly Disagree – Strongly Agree).
I like to park my car or bike always in the same place.
I generally cook with the same spices/flavourings.
When walking past a plate of sweets or biscuits, I can’t resist taking one.
I tend to go to bed at roughly the same time every night.
I often take a snack while on the go (e.g. when driving, walking down the street, or
surfing the web).
I quite happily work within my comfort zone rather than challenging myself, if I don’t
have to.
I tend to do things in the same order every morning (e.g. get up, go to the toilet, have a coffee...).
Eating crisps or biscuits straight out of the packet is typical of me.
Whenever I go into the kitchen, I typically look in the fridge.
I always try to get the same seat in places such as on the bus, in the cinema, or in
church.
I often find myself finishing off a packet of biscuits just because it is lying there.
I normally buy the same foods from the same grocery store.
I rely on what is tried and tested rather than exploring something new.
I generally eat the same things for breakfast every day.
I tend to like routine.
I usually treat myself to a snack at the end of the workday.
In a restaurant, I tend to order dishes that I am familiar with.
I am one of those people who get really annoyed by last minute cancellations.
I often find myself eating without being aware of it.
I usually sit at the same place at the dinner table.
I often find myself running on 'autopilot', and then wonder why I ended up in a particular place.
I always follow a certain order when preparing a meal.
Television makes me particularly prone to uncontrolled eating.
I tend to stick with the version of the software package that I am familiar with for as
long as I can.
I often find myself opening up the cabinet to take a snack.
I am prone to eating more when I feel stressed.
I find comfort in regularity.
Table A.13: Creature of Habit
Big Five Personality - Mini IPIP
Here are a number of characteristics that may or may not apply to you. For example,
do you agree that you are someone who likes to spend time with others? Please
indicate your agreement with each of the following statements about yourself (5-point
Likert, Strongly Disagree – Strongly Agree).
Surgency or Extraversion
Am the life of the party.
Talk to a lot of different people at parties.
Don’t talk a lot.*
Keep in the background.*
Agreeableness
Sympathize with others’ feelings.
Feel others’ emotions.
Am not really interested in others.*
Am not interested in other people’s problems.*
Conscientiousness
I get chores done right away.
I like order.
I often forget to put things back in their proper place.*
I make a mess of things.*
Neuroticism
I have frequent mood swings.
I get upset easily.
I am relaxed most of the time.*
I seldom feel blue.*
Intellect or Imagination
Have a vivid imagination.
Have difficulty understanding abstract ideas.*
Am not interested in abstract ideas.*
Do not have a good imagination.*
Table A.14: Big Five Personality - Mini IPIP
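For multi-construct instruments such as the Mini-IPIP above, each trait score is typically computed as the mean of its items after reverse-coded items are rescored. The sketch below is a minimal, hypothetical illustration of that aggregation for the Extraversion items; the item keys and response values are illustrative only and are not data from this dissertation.

```python
def subscale_mean(responses: dict, reverse_items: set, scale_max: int = 5) -> float:
    """Average one participant's item responses, rescoring reverse-coded items."""
    scored = [
        (scale_max + 1) - value if item in reverse_items else value
        for item, value in responses.items()
    ]
    return sum(scored) / len(scored)

# Hypothetical Extraversion responses; the last two items are reverse coded
# (marked with * in Table A.14).
extraversion = {"life_of_party": 4, "talk_at_parties": 5,
                "dont_talk_a_lot": 2, "keep_in_background": 1}
score = subscale_mean(extraversion,
                      reverse_items={"dont_talk_a_lot", "keep_in_background"})
print(score)  # (4 + 5 + 4 + 5) / 4 = 4.5
```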
Workplace Fear of Missing Out
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Disagree – Strongly Agree).
Information Exclusion
I worry that I might miss important work-related updates.
I worry that I might miss out on valuable work-related information.
I worry that I will miss out on important work-related news.
I worry that I will miss out on important information that is relevant to my job.
I worry that I will not know what is happening at work.
Relational Exclusion
I get anxious that I will miss out on an opportunity to make important business
connections.
I am constantly thinking that I might miss opportunities to strengthen business con-
tacts.
I am constantly thinking that I might miss opportunities to make new business con-
tacts.
I worry that I will miss out on networking opportunities that my coworkers will have.
I fear that my coworkers might make business contacts that I won’t make.
Table A.15: Workplace Fear of Missing Out
Cynical Attitudes Towards AI
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (6-point
Likert, Strongly Disagree – Strongly Agree).
Artificial intelligence will not put itself out to help people.
Artificial intelligence will use somewhat unfair means to gain profit or an advantage
rather than lose it.
Artificial intelligence will not care much what happens to you.
I think artificial intelligence will lie in order to get ahead.
If artificial intelligence did something nice for me, I would wonder about its hidden reasons for doing it.
Table A.16: Cynical Attitudes Towards AI
General Computer Self-Efficacy
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Disagree – Strongly Agree).
I believe I have the ability to unpack and set up a new computer.
I believe I have the ability to describe how a computer works.
I believe I have the ability to install new software applications on a computer.
I believe I have the ability to identify and correct common operational problems with
a computer.
I believe I have the ability to remove information from a computer that I no longer
need.
I believe I have the ability to understand common operational problems with a com-
puter.
I believe I have the ability to use a computer to display or present information in a
desired manner.
Table A.17: General Computer Self-Efficacy
Computing Technology Continuum of Perspective Scale
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Disagree – Strongly Agree).
Computers are capable of telling doctors how to treat medical problems.
Computers are capable of effectively teaching people.
Computers are capable of facilitating large group meetings.
Computers are capable of remembering things.
Computers are capable of learning from their experiences.
Computers are capable of caring for children.
Computers are capable of holding intelligent conversations.
Help-menus are capable of telling you the answer when you have questions.
When I play a game with a computer, I worry that it might cheat.
I have used a computer who didn’t like me.
Computers are capable of controlling my actions.
Computers are capable of infringing on personal rights and freedoms.
I have had my privacy invaded by a computer.
Table A.18: Computing Technology Continuum of Perspective Scale
Human-Machine-Interaction-Interdependence Questionnaire
Please read each statement carefully and select the one response that you feel most
accurately describes your views and experiences. THERE ARE NO RIGHT OR
WRONG ANSWERS. Please answer honestly and do not skip any questions (5-point
Likert, Strongly Disagree – Strongly Agree).
Conflict
I reject the system’s preferred actions.
We can both achieve our preferred outcomes in this situation.*
Our preferred outcomes in this situation are in conflict.
The system prefers a different outcome than I do in this situation.
I prefer a different outcome than the system in this situation.
Future Interdependence System to Human
The outcome of this situation affects how the system will interact with me in the
future.
My behavior in this situation affects how the system will interact with me in the
future.
My behavior in this situation affects how the system will behave in the future.
My behavior in this situation has no effect on how the system will behave in the
future.*
Future Interdependence Human to System
The outcome of this situation affects how I will interact with the system in the
future.
The behavior of the system in this situation affects how I will interact with the system
in the future.
The behavior of the system in this situation has an impact on how I will behave in
the future.
The behavior of the system in this situation has no effect on how I will behave in
the future.*
Information Certainty System to Human
The system understands how its action affects me.
The system knows what I plan to do in this situation.
The system is aware of my planned action in this situation.
The system knows why I prefer a certain action.
The system does not know what I plan to do in this situation.*
Information Certainty Human to System
I understand how my action affects the system.
I know what the system is planning in this situation.
I am informed about the system’s planned action in this situation.
I know why the system prefers a certain action.
I do not know what the system is planning in this situation.*
Mutual Dependence
We are dependent on each other in this situation.
We are both dependent on each other in this situation.
We need each other to achieve our best outcome in this situation.
The outcome of each depends on the behavior of the other.
We need each other to resolve this situation.
We need to work together to manage this situation.
Power
Who felt they had the most influence on what happened in this situation?
Who felt they had the most influence on the action that was taken?
Who felt they had the least influence on what happened in the situation?*
Who did you feel had the least influence on the action carried out?*
Table A.19: Human-Machine-Interaction-Interdependence Questionnaire
Appendix B Study 2 Vignettes
Scenario
A new AI [Tool,Teammate] has been developed and your software
development team has the opportunity to [Use,Accept] it. It will
write [5,20,35,50,65,80,95]% of the code you normally write.
If you choose to [Use,Accept] the AI [Tool,Teammate] into your team,
your coding responsibilities will be about [95,80,65,50,35,20,5]% of
what they currently are.
Please answer the following questions about your opinion of this op-
portunity. (7-point Likert Questions)
Question 1: I would be likely to [use,accept] the AI [Tool,Teammate]. (Strongly Disagree – Strongly Agree)
Question 2: If I were to [use,accept] the AI [Tool,Teammate], it would be helpful to my team. (Strongly Disagree – Strongly Agree)
Question 3: If I were to [use,accept] the AI [Tool,Teammate], my teammates would still benefit from my skillset. (Strongly Disagree – Strongly Agree)
Question 4: If my team were required to [use,accept] the AI [Tool,Teammate], I would feel concerned for my job security. (Strongly Disagree – Strongly Agree)
Question 5: I think the AI [Tool,Teammate] would be capable of performing these responsibilities. (Strongly Disagree – Strongly Agree)
Table B.20: Study 1 Vignette Template
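The bracketed terms in the template above are the manipulated factors that are substituted into the vignette each participant reads, with the AI's share of the coding work and the participant's remaining responsibilities always summing to 100%. As a minimal sketch of how that substitution could be carried out (the function and variable names are hypothetical and not part of the study materials):

```python
FRAMINGS = {"Tool": "use", "Teammate": "accept"}
AI_SHARES = [5, 20, 35, 50, 65, 80, 95]

def build_vignette(framing: str, ai_share: int) -> str:
    """Fill the vignette template for one experimental condition."""
    verb = FRAMINGS[framing]
    human_share = 100 - ai_share  # participant's remaining responsibilities
    return (
        f"A new AI {framing} has been developed and your software development "
        f"team has the opportunity to {verb} it. It will write {ai_share}% of "
        f"the code you normally write. If you choose to {verb} the AI "
        f"{framing} into your team, your coding responsibilities will be "
        f"about {human_share}% of what they currently are."
    )

# Example: the 'Teammate' framing at the 65% condition.
print(build_vignette("Teammate", 65))
```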
Scenario
A new AI Teammate has been developed and your software development team has the opportunity to accept it. As an AI Teammate, the Teammate would be able to perform the following responsibilities if you were to accept it:
1 Checking Code for Spelling Errors and Typos
2 Checking Code for Logic Errors
3 Writing Code Inside Designed Function Blocks
4 Designing Code Based on a Software Development Plan
5 Creating a Software Development Plan Based Off of Written Require-
ments
If you were to Accept the Teammate, your responsibilities as a soft-
ware developer would be as follows:
1 Creating Written Requirements Based on Client Interviews
2 Interviewing Clients about the Requirements of the Software
3 Overseeing the AI Teammate
Other software developers at your company have given the following
endorsements of the AI teammate:
Their teams have accepted the AI teammate onto their team
Their teams have increased in productivity since using the AI team
member
They have enjoyed working with the AI teammate
Please answer the following questions about your opinion of this op-
portunity.
Question 1: I would be likely to accept the AI teammate. (Strongly Disagree – Strongly Agree)
Question 2: If I were to accept the AI teammate, it would be helpful to my team. (Strongly Disagree – Strongly Agree)
Question 3: If I were to accept the AI teammate, my teammates would still benefit from my skillset. (Strongly Disagree – Strongly Agree)
Question 4: If my team were required to accept the AI teammate, I would feel concerned for my job security. (Strongly Disagree – Strongly Agree)
Question 5: I think the AI teammate would be capable of performing these responsibilities. (Strongly Disagree – Strongly Agree)
Table B.21: Study 2 Example Vignette. The number of tasks changes as a within-subjects condition.
Bibliography
[1] James D Abbey and Margaret G Meloy. Attention by design: Using attention
checks to detect inattentive respondents and improve data quality. Journal of
Operations Management, 53:63–70, 2017.
[2] Aswin Thomas Abraham and Kevin McGee. AI for dynamic team-mate adap-
tation in games. In Proceedings of the 2010 IEEE Conference on Computational
Intelligence and Games, pages 419–426, August 2010. ISSN: 2325-4289.
[3] Dominic Abrams and Michael A. Hogg. Social Identification, Self-
Categorization and Social Influence. European Review of Social Psychology,
1(1):195–228, January 1990.
[4] Gwen Bachmann Achenreiner. Materialistic values and susceptibility to influ-
ence in children. ACR North American Advances, 1997.
[5] Amina Adadi and Mohammed Berrada. Peeking Inside the Black-Box: A Survey
on Explainable Artificial Intelligence (XAI). IEEE Access, 6:52138–52160, 2018.
Conference Name: IEEE Access.
[6] Marie-Colombe Afota, Ariane Ollier-Malaterre, and Christian Vandenberghe.
How supervisors set the tone for long hours: Vicarious learning, subordinates’
self-motives and the contagion of working hours. Human Resource Management
Review, 29(4):100673, December 2019.
[7] Ritu Agarwal and Jayesh Prasad. Are Individual Differences Germane to the
Acceptance of New Information Technologies? Decision Sciences, 30(2):361–
391, 1999. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1540-
5915.1999.tb01614.x.
[8] Donatien Agbissoh OTOTE, Benshuai Li, Bo Ai, Song Gao, Jing Xu, Xiaoying
Chen, and Guannan Lv. A Decision-Making Algorithm for Maritime Search and
Rescue Plan. Sustainability, 11(7):2084, January 2019. Number: 7 Publisher:
Multidisciplinary Digital Publishing Institute.
[9] Sina Aghaei, Mohammad Javad Azizi, and Phebe Vayanos. Learning Optimal
and Fair Decision Trees for Non-Discriminative Decision-Making. Proceedings
of the AAAI Conference on Artificial Intelligence, 33(01):1418–1426, July 2019.
[10] Al-Imran Ahmed and Md Mahmudul Hasan. A hybrid approach for decision
making to detect breast cancer using data mining and autonomous agent based
on human agent teamwork. In 2014 17th International Conference on Computer
and Information Technology (ICCIT), pages 320–325, December 2014.
[11] Jungyong Ahn, Jungwon Kim, and Yongjun Sung. AI-powered recommenda-
tions: the roles of perceived similarity and psychological distance on persuasion.
International Journal of Advertising, 40(8):1366–1384, November 2021. Pub-
lisher: Routledge eprint: https://doi.org/10.1080/02650487.2021.1982529.
[12] Bo Ai, Benshuai Li, Song Gao, Jiangling Xu, and Hengshuai Shang. An Intel-
ligent Decision Algorithm for the Generation of Maritime Search and Rescue
Emergency Response Plans. IEEE Access, 7:155835–155850, 2019. Conference
Name: IEEE Access.
[13] Ibrahim M Al-Jabri and Narcyz Roztocki. Adoption of ERP systems: Does
information transparency matter? Telematics and Informatics, 32(2):300–310,
2015. Publisher: Elsevier.
[14] Noor Al-Sibai. Microsoft fires 10,000 employees as it invests in ai, Jan 2023.
[15] Ari Alamäki, Juho Pesonen, and Amir Dirin. Triggering effects of mobile video
marketing in nature tourism: Media richness perspective. Information Process-
ing & Management, 56(3):756–770, May 2019.
[16] Shaikha FS Alhashmi, Muhammad Alshurideh, Barween Al Kurdi, and Said A
Salloum. A systematic review of the factors affecting the artificial intelligence
implementation in the health care sector. In The International Conference on
Artificial Intelligence and Computer Vision, pages 37–49. Springer, 2020.
[17] Shaikha FS Alhashmi, Said A Salloum, and Sherief Abdallah. Critical success
factors for implementing artificial intelligence (ai) projects in dubai government
united arab emirates (uae) health sector: applying the extended technology
acceptance model (tam). In International Conference on Advanced Intelligent
Systems and Informatics, pages 393–405. Springer, 2019.
[18] Safinah Ali, Blakeley H Payne, Randi Williams, Hae Won Park, and Cynthia
Breazeal. Constructionism, ethics, and creativity: Developing primary and
middle school artificial intelligence education. In International Workshop on
Education in Artificial Intelligence K-12 (EDUAI’19), pages 1–4, 2019.
[19] Robert J Allio. Becoming a leader–first, take charge of your own learning
process. Strategy & Leadership, 2018.
[20] Maria Francisca Lies Ambarwati, Herlina Damaryanti, Harjanto Prabowo, and
Muhammad Hamsal. The Impact of a Digital Influencer to the Purchase De-
cision. IPTEK Journal of Proceedings Series, (5):220–224, December 2019.
Number: 5.
[21] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira
Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen,
et al. Guidelines for human-ai interaction. In Proceedings of the 2019 chi
conference on human factors in computing systems, pages 1–13, 2019.
[22] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira
Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen,
Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. Guidelines for Human-AI
Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in
Computing Systems, CHI ’19, pages 1–13, New York, NY, USA, May 2019.
Association for Computing Machinery.
[23] Michael Anderson and Susan Leigh Anderson. How should ai be developed,
validated, and implemented in patient care? AMA Journal of Ethics, 21(2):125–
130, 2019.
[24] Elisabeth André, Elisabetta Bevacqua, Dirk Heylen, Radoslaw Niewiadom-
ski, Catherine Pelachaud, Christopher Peters, Isabella Poggi, and Matthias
Rehm. Non-verbal Persuasion and Communication in an Affective Agent.
In Roddy Cowie, Catherine Pelachaud, and Paolo Petta, editors, Emotion-
Oriented Systems: The Humaine Handbook, Cognitive Technologies, pages 585–
608. Springer, Berlin, Heidelberg, 2011.
[25] Gabriela N. Aranda, Aurora Vizcaino, Alejandra Cechich, Mario Piattini, and
Jose Jesus Castro-Schez. Cognitive-Based Rules as a Means to Select Suitable
Groupware Tools. In 2006 5th IEEE International Conference on Cognitive
Informatics, volume 1, pages 418–423, July 2006.
[26] Katrin Arning and Martina Ziefle. Different Perspectives on Technology Ac-
ceptance: The Role of Technology Type and Age. In Andreas Holzinger and
Klaus Miesenberger, editors, HCI and Usability for e-Inclusion, Lecture Notes
in Computer Science, pages 20–41, Berlin, Heidelberg, 2009. Springer.
[27] Gunjan Arora, Jayadev Joshi, Rahul Shubhra Mandal, Nitisha Shrivastava,
Richa Virmani, and Tavpritesh Sethi. Artificial Intelligence in Surveillance,
Diagnosis, Drug Discovery and Vaccine Development against COVID-19.
Pathogens, 10(8):1048, August 2021. Number: 8 Publisher: Multidisciplinary
Digital Publishing Institute.
[28] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael
Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss,
Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghaven-
dra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder
Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. One Explanation
Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques.
arXiv:1909.03012 [cs, stat], September 2019. arXiv: 1909.03012.
[29] Vijay Arya, Rachel KE Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael
Hind, Samuel C Hoffman, Stephanie Houde, Q Vera Liao, Ronny Luss, Alek-
sandra Mojsilovic, and others. AI Explainability 360: An Extensible Toolkit
for Understanding Data and Machine Learning Models. J. Mach. Learn. Res.,
21(130):1–6, 2020.
[30] Minoru Asada, Hiroaki Kitano, Itsuki Noda, and Manuela Veloso. RoboCup:
Today and tomorrow—What we have learned. Artificial Intelligence,
110(2):193–214, June 1999.
[31] Jan Auernhammer. Human-centered ai: The role of human-centered design
research in the development of ai. 2020.
[32] Katrin Auspurg and Thomas Hinz. Factorial Survey Experiments. SAGE Pub-
lications, November 2014. Google-Books-ID: 1jieBQAAQBAJ.
[33] David H. Autor. Why Are There Still So Many Jobs? The History and Fu-
ture of Workplace Automation. Journal of Economic Perspectives, 29(3):3–30,
September 2015.
[34] Ömer Aydın and Enis Karaarslan. OpenAI ChatGPT generated literature review: Digital twin in healthcare. Available at SSRN 4308687, 2022.
[35] Daniel G. Bachrach, Benjamin C. Powell, Brian J. Collins, and R. Glenn Richey.
Effects of task interdependence on the relationship between helping behavior
and group performance. Journal of Applied Psychology, 91(6):1396–1405, 2006.
Place: US Publisher: American Psychological Association.
[36] Daniel G Bachrach, Benjamin C Powell, Brian J Collins, and R Glenn Richey.
Effects of task interdependence on the relationship between helping behavior
and group performance. Journal of Applied Psychology, 91(6):1396, 2006.
[37] Anthony L. Baker, Sean M. Fitzhugh, Lixiao Huang, Daniel E. Forster, An-
gelique Scharine, Catherine Neubauer, Glenn Lematta, Shawaiz Bhatti, Craig J.
Johnson, Andrea Krausman, Eric Holder, Kristin E. Schaefer, and Nancy J.
Cooke. Approaches for assessing communication in human-autonomy teams.
Human-Intelligent Systems Integration, 3(2):99–128, June 2021.
[38] Jerry Ball, Christopher Myers, Andrea Heiberg, Nancy J. Cooke, Michael
Matessa, Mary Freiman, and Stuart Rodgers. The synthetic teammate project.
Computational and Mathematical Organization Theory, 16(3):271–299, Septem-
ber 2010.
[39] Gagan Bansal, Besmira Nushi, Ece Kamar, Eric Horvitz, and Daniel S. Weld.
Is the most accurate ai the best teammate? optimizing ai for teamwork. In
Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages
11405–11414, 2021. Issue: 13.
[40] Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld,
and Eric Horvitz. Beyond Accuracy: The Role of Mental Models in Human-AI
Team Performance. Proceedings of the AAAI Conference on Human Computa-
tion and Crowdsourcing, 7:2–11, October 2019.
[41] Gagan Bansal, Besmira Nushi, Ece Kamar, Dan Weld, Walter Lasecki, and
Eric Horvitz. A Case for Backward Compatibility for Human-AI Teams.
arXiv:1906.01148 [cs, stat], June 2019. arXiv: 1906.01148.
[42] Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki,
and Eric Horvitz. Updates in Human-AI Teams: Understanding and Addressing
the Performance/Compatibility Tradeoff. Proceedings of the AAAI Conference
on Artificial Intelligence, 33(01):2429–2437, July 2019. Number: 01.
[43] Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi,
Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. Does the Whole Exceed its
Parts? The Effect of AI Explanations on Complementary Team Performance.
In Proceedings of the 2021 CHI Conference on Human Factors in Computing
Systems, CHI ’21, pages 1–16, New York, NY, USA, May 2021. Association for
Computing Machinery.
[44] Daniel Barber, Sergey Leontyev, Bo Sun, Larry Davis, Denise Nicholson, and
Jessie Y.C. Chen. The mixed-initiative experimental testbed for collaborative
human robot interactions. In 2008 International Symposium on Collaborative
Technologies and Systems, pages 483–489, May 2008.
[45] Paul Barratt. Healthy competition: A qualitative study investigating persuasive
technologies and the gamification of cycling. Health & Place, 46:328–336, July
2017.
[46] Shishir Bashyal and Ganesh Kumar Venayagamoorthy. Human swarm interac-
tion for radiation source search and localization. In 2008 IEEE Swarm Intelli-
gence Symposium, pages 1–8. IEEE, 2008.
[47] Rajeev Batra, Pamela M. Homer, and Lynn R. Kahle. Values, Susceptibil-
ity to Normative Influence, and Attribute Importance Weights: A Nomologi-
cal Analysis. Journal of Consumer Psychology, 11(2):115–128, 2001. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1207/S15327663JCP1102 04.
[48] Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman,
Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino,
Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan
Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder
Singh, Kush R. Varshney, and Yunfeng Zhang. AI Fairness 360: An Extensible
Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic
Bias. arXiv:1810.01943 [cs], October 2018. arXiv: 1810.01943.
[49] Jay Belsky. Variation in Susceptibility to Environmental Influence: An Evolu-
tionary Argument. Psychological Inquiry, 8(3):182–186, July 1997.
[50] Brett Bethke, Jonathan P How, and John Vian. Group health management
of uav teams with applications to persistent surveillance. In 2008 American
Control Conference, pages 3145–3150. IEEE, 2008.
[51] Amisha Bhargava, Marais Bester, and Lucy Bolton. Employees’ perceptions
of the implementation of robotics, artificial intelligence, and automation (raia)
on job satisfaction, job security, and employability. Journal of Technology in
Behavioral Science, 6(1):106–113, 2021.
[52] Anol Bhattacherjee and Michael Harris. Individual Adaptation of
Information Technology. Journal of Computer Information Systems,
50(1):37–45, September 2009. Publisher: Taylor & Francis eprint:
https://www.tandfonline.com/doi/pdf/10.1080/08874417.2009.11645360.
[53] Shawaiz Bhatti, Mustafa Demir, Nancy J Cooke, and Craig J Johnson. As-
sessing communication and trust in an ai teammate in a dynamic task environ-
ment. In 2021 IEEE 2nd International Conference on Human-Machine Systems
(ICHMS), pages 1–6. IEEE, 2021.
[54] Yochanan E. Bigman and Kurt Gray. Life and death decisions of autonomous
vehicles. Nature, 579(7797):E1–E2, March 2020. Number: 7797 Publisher:
Nature Publishing Group.
[55] Alessandro Blasimme and Effy Vayena. The ethics of ai in biomedical research,
patient care and public health. Patient Care and Public Health (April 9, 2019).
Oxford Handbook of Ethics of Artificial Intelligence, Forthcoming, 2019.
[56] Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lil-
ian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Peg-
man, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby, and Alan Win-
field. Principles of robotics: regulating robots in the real world. Connec-
tion Science, 29(2):124–129, April 2017. Publisher: Taylor & Francis eprint:
https://doi.org/10.1080/09540091.2016.1271400.
[57] Alexandros Bousdekis, Stefan Wellsandt, Enrica Bosani, Katerina Lepenioti,
Dimitris Apostolou, Karl Hribernik, and Gregoris Mentzas. Human-AI Col-
laboration in Quality Control with Augmented Manufacturing Analytics. In
Alexandre Dolgui, Alain Bernard, David Lemoine, Gregor von Cieminski, and
David Romero, editors, Advances in Production Management Systems. Artificial
Intelligence for Sustainable and Resilient Production Systems, IFIP Advances
in Information and Communication Technology, pages 303–310, Cham, 2021.
Springer International Publishing.
[58] Jeffrey M. Bradshaw, Paul J. Feltovich, Matthew J. Johnson, Larry Bunch,
Maggie R. Breedy, Tom Eskridge, Hyuckchul Jung, James Lott, and Andrzej
Uszok. Coordination in Human-Agent-Robot Teamwork. In 2008 International
Symposium on Collaborative Technologies and Systems, pages 467–476, May
2008.
[59] Michael T Brannick, Regina M Roach, and Eduardo Salas. Understanding team
performance: A multimethod study. Human Performance, 6(4):287–308, 1993.
[60] J Scott Brennen, Philip N Howard, and Rasmus Kleis Nielsen. An Industry-Led
Debate: How UK Media Cover Artificial Intelligence. page 10.
[61] Mark J. Brosnan. Technophobia: The Psychological Impact of Information
Technology. Routledge, London, June 1998.
[62] Frédéric F. Brunel and Michelle R. Nelson. Message Order Effects and Gen-
der Differences in Advertising Persuasion. Journal of Advertising Research,
43(3):330–341, September 2003. Publisher: Cambridge University Press.
[63] Gabriele Buchholtz. Artificial Intelligence and Legal Tech: Challenges to the
Rule of Law. In Thomas Wischmeyer and Timo Rademacher, editors, Regulating
Artificial Intelligence, pages 175–198. Springer International Publishing, Cham,
2020.
[64] Joy Buolamwini and Timnit Gebru. Gender Shades: Intersectional Accuracy
Disparities in Commercial Gender Classification. In Proceedings of the 1st Con-
ference on Fairness, Accountability and Transparency, pages 77–91. PMLR,
January 2018.
[65] C. Shawn Burke, Kevin C. Stagl, Eduardo Salas, Linda Pierce, and Dana
Kendall. Understanding team adaptation: A conceptual analysis and model.
Journal of Applied Psychology, 91(6):1189–1207, 2006. Place: US Publisher:
American Psychological Association.
[66] Andrew Burton-Jones and Geoffrey S. Hubona. Individual differences and usage
behavior: revisiting a technology acceptance model assumption. ACM SIGMIS
Database: the DATABASE for Advances in Information Systems, 36(2):58–77,
June 2005.
[67] Andrew Burton-Jones and Geoffrey S. Hubona. The mediation of external
variables in the technology acceptance model. Information & Management,
43(6):706–717, September 2006.
[68] John Christian Busch and Richard M. Jaeger. Influence of Type of Judge, Nor-
mative Information, and Discussion on Standards Recommended for the Na-
tional Teacher Examinations. Journal of Educational Measurement, 27(2):145–
163, 1990. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1745-
3984.1990.tb00739.x.
[69] Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael
Terry. ”Hello AI”: Uncovering the Onboarding Needs of Medical Practition-
ers for Human-AI Collaborative Decision-Making. Proceedings of the ACM on
Human-Computer Interaction, 3(CSCW):104:1–104:24, November 2019.
[70] David F. Caldwell and Charles A. O’Reilly. The Determinants of Team-Based
Innovation in Organizations: The Role of Social Influence. Small Group Re-
search, 34(4):497–517, August 2003. Publisher: SAGE Publications Inc.
[71] Gloria Calhoun, Jessica Bartik, Heath Ruff, Kyle Behymer, and Elizabeth Frost.
Enabling human-autonomy teaming with multi-unmanned vehicle control inter-
faces. Human-Intelligent Systems Integration, 3(2):155–174, June 2021.
[72] D. Cao, H. Tao, Y. Wang, A. Tarhini, and S. Xia. Acceptance of
automation manufacturing technology in China: an examination of per-
ceived norm and organizational efficacy. Production Planning & Con-
trol, 31(8):660–672, June 2020. Publisher: Taylor & Francis eprint:
https://doi.org/10.1080/09537287.2019.1669091.
[73] John Carbone and James Crowder. Collaborative Shared Awareness: Human-AI
Collaboration. July 2014.
[74] Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter
Abbeel, and Anca Dragan. On the Utility of Learning about Humans for
Human-AI Coordination. In Advances in Neural Information Processing Sys-
tems, volume 32. Curran Associates, Inc., 2019.
[75] Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter
Abbeel, and Anca Dragan. On the utility of learning about humans for human-
ai coordination. Advances in neural information processing systems, 32, 2019.
[76] Dorwin Cartwright. Influence, Leadership, Control. SSRN Scholarly Paper ID
1497766, Social Science Research Network, Rochester, NY, 1965.
[77] Arturo Casadevall and Ferric C Fang. Winner takes all. Scientific American,
307(2):13–17, 2012. Publisher: JSTOR.
[78] K Catchpole, A Mishra, A Handa, and P McCulloch. Teamwork and error in the
operating room: analysis of skills and roles. Annals of surgery, 247(4):699–706,
2008.
[79] Stephen Cave, Claire Craig, Kanta Dihal, Sarah Dillon, Jessica Montgomery,
Beth Singler, and Lindsay Taylor. Portrayals and perceptions of AI and why
they matter. Report, The Royal Society, December 2018. Accepted: 2018-12-
19T05:57:26Z.
[80] Jessie Y. C. Chen. Human-autonomy teaming in military settings. Theoretical
Issues in Ergonomics Science, 19(3):255–258, May 2018. Publisher: Taylor &
Francis eprint: https://doi.org/10.1080/1463922X.2017.1397229.
[81] Jessie Y. C. Chen and Michael J. Barnes. Human–Agent Teaming for Multirobot
Control: A Review of Human Factors Issues. IEEE Transactions on Human-
Machine Systems, 44(1):13–29, February 2014.
[82] Jessie Y. C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz,
Julia L. Wright, and Michael Barnes. Situation awareness-based agent trans-
parency and human-autonomy teaming effectiveness. Theoretical Issues in
Ergonomics Science, 19(3):259–282, May 2018. Publisher: Taylor & Francis
eprint: https://doi.org/10.1080/1463922X.2017.1315750.
[83] Ke Chen and Alan H. S. Chan. A review of technology acceptance by older
adults. Gerontechnology, 10(1):1–12, 2011. Place: Netherlands Publisher: In-
ternational Society for Gerontechnology.
[84] Wei Chen, Edmund Durfee, and Melanie Dumas. Human agent collaboration
in a simulated combat medical scenario. In 2009 International Symposium on
Collaborative Technologies and Systems, pages 367–375, May 2009.
[85] Erin K Chiou, Mustafa Demir, Verica Buchanan, Christopher C Corral, Mica R
Endsley, Glenn J Lematta, Nancy J Cooke, and Nathan J McNeese. Towards
human–robot teaming: tradeoffs of explanation-based communication strategies
in a virtual search and rescue task. International Journal of Social Robotics,
14(5):1117–1136, 2022.
[86] Abhinav Choudhry, Jinda Han, Xiaoyu Xu, and Yun Huang. ”I Felt a Little
Crazy Following a ’Doll’”: Investigating Real Influence of Virtual Influencers
on Their Followers. Proceedings of the ACM on Human-Computer Interaction,
6(GROUP):43:1–43:28, January 2022.
[87] Jessica Siegel Christian, Michael S. Christian, Matthew J. Pearsall, and Erin C.
Long. Team adaptation in context: An integrated conceptual model and
meta-analytic review. Organizational Behavior and Human Decision Processes,
140:62–89, May 2017.
[88] Jae Eun Chung, Namkee Park, Hua Wang, Janet Fulk, and Margaret McLaugh-
lin. Age differences in perceptions of online community participation among
non-users: An extension of the Technology Acceptance Model. Computers in
Human Behavior, 26(6):1674–1684, November 2010.
[89] Mohammad Chuttur. Overview of the technology acceptance model: Origins,
developments and future directions. 2009.
[90] Richard E Clark. Fostering the work motivation of individuals and teams.
Performance improvement, 42(3):21–29, 2003.
[91] Vary T. Coates. OFFICE AUTOMATION: PRODUCTIVITY, EMPLOY-
MENT AND SOCIAL IMPACTS. Office Technology and People, 4(3):315–326,
January 1988. Publisher: MCB UP Ltd.
[92] Jacob Cohen. Statistical power analysis for the behavioral sciences. Routledge,
2013.
[93] Graeme W. Coleman, Lorna Gibson, Vicki L. Hanson, Ania Bobrowicz, and
Alison McKay. Engaging the disengaged: how do we design technology for
digitally excluded older adults? In Proceedings of the 8th ACM Conference on
Designing Interactive Systems, DIS ’10, pages 175–178, New York, NY, USA,
August 2010. Association for Computing Machinery.
[94] Phoebe Constantinou. Promoting healthy competition using modified rules and
sports from other cultures. Strategies, 27(4):29–33, 2014.
[95] Nancy J Cooke, Polemnia G Amazeen, Jamie C Gorman, Stephen J Guastello,
Aaron Likens, and Ron Stevens. Modeling the complex dynamics of teamwork
from team cognition to neurophysiology. In Proceedings of the Human Factors
and Ergonomics Society Annual Meeting, volume 56, pages 183–187. SAGE
Publications Sage CA: Los Angeles, CA, 2012.
[96] Nancy J. Cooke, Jamie C. Gorman, Christopher Myers, and Jasmine Duran.
Theoretical underpinning of interactive team cognition. In Theories of team
cognition: Cross-disciplinary perspectives, Series in applied psychology, pages
187–207. Routledge/Taylor & Francis Group, New York, NY, US, 2012.
[97] Nancy J. Cooke, Jamie C. Gorman, Christopher W. Myers, and Jasmine L.
Duran. Interactive Team Cognition. Cognitive Science, 37(2):255–285, 2013.
eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/cogs.12009.
[98] Christian Coons and Michael Weber. Manipulation: theory and practice. Oxford
University Press, 2014.
[99] Antonio Correia, Benjamim Fonseca, Hugo Paredes, Ramon Chaves, Daniel
Schneider, and Shoaib Jameel. Determinants and predictors of intentionality
and perceived reliability in human-ai interaction as a means for innovative sci-
entific discovery. In 2021 IEEE International Conference on Big Data (Big
Data), pages 3681–3684. IEEE, 2021.
[100] James A Crowder and John N Carbone. Collaborative shared awareness:
human-ai collaboration. In Proceedings of the International Conference on In-
formation and Knowledge Engineering (IKE). The Steering Committee of The
World Congress in Computer Science, Computer Engineering and Applied Com-
puting (WorldComp), volume 1, 2014.
[101] Tammy N. Crutchfield and Kimberly Klamon. Assessing the Dimen-
sions and Outcomes of an Effective Teammate. Journal of Education
for Business, 89(6):285–291, August 2014. Publisher: Routledge eprint:
https://doi.org/10.1080/08832323.2014.885873.
[102] N. Dahlbäck, A. Jönsson, and L. Ahrenberg. Wizard of Oz studies – why and
how. Knowledge-Based Systems, 6(4):258–266, December 1993.
[103] Micael Dahlén, Alexandra Rasch, and Sara Rosengren. Love at First Site? A
Study of Website Advertising Effectiveness. Journal of Advertising Research,
43(1):25–33, March 2003. Publisher: Cambridge University Press.
[104] Jeffrey Dalton, Victor Ajayi, and Richard Main. Vote Goat: Conversational
Movie Recommendation. In The 41st International ACM SIGIR Conference on
Research & Development in Information Retrieval, SIGIR ’18, pages 1285–1288,
New York, NY, USA, June 2018. Association for Computing Machinery.
[105] Fred D Davis. A technology acceptance model for empirically testing new end-
user information systems: Theory and results. PhD Thesis, Massachusetts
Institute of Technology, 1985.
[106] Fred D. Davis. Perceived Usefulness, Perceived Ease of Use, and User Accep-
tance of Information Technology. MIS Quarterly, 13(3):319–340, 1989. Pub-
lisher: Management Information Systems Research Center, University of Min-
nesota.
[107] Leslie A DeChurch and Jessica R Mesmer-Magnus. The cognitive underpinnings
of effective teamwork: a meta-analysis. Journal of applied psychology, 95(1):32,
2010.
[108] Manlio Del Giudice, Veronica Scuotto, Beatrice Orlando, and Mario Mustilli.
Toward the human Centered approach. A revised model of individual accep-
tance of AI. Human Resource Management Review, page 100856, September
2021.
[109] Mustafa Demir, Aaron D. Likens, Nancy J. Cooke, Polemnia G. Amazeen, and
Nathan J. McNeese. Team coordination and effectiveness in human-autonomy
teaming. IEEE Transactions on Human-Machine Systems, 49(2):150–159, 2018.
Publisher: IEEE.
[110] Mustafa Demir, Nathan J. McNeese, and Nancy J. Cooke. Team synchrony
in human-autonomy teaming. In International conference on applied human
factors and ergonomics, pages 303–312. Springer, 2017.
[111] Mustafa Demir, Nathan J McNeese, and Nancy J Cooke. Understanding human-
robot teams in light of all-human teams: Aspects of team interaction and shared
cognition. International Journal of Human-Computer Studies, 140:102436,
2020.
[112] Mustafa Demir, Nathan J. McNeese, Jamie C. Gorman, Nancy J. Cooke,
Christopher W. Myers, and David A. Grimm. Exploration of Teammate Trust
and Interaction Dynamics in Human-Autonomy Teaming. IEEE Transactions
on Human-Machine Systems, 51(6):696–705, December 2021. Conference Name:
IEEE Transactions on Human-Machine Systems.
[113] Alan R. Dennis and Susan T. Kinney. Testing Media Richness Theory in the
New Media: The Effects of Cues, Feedback, and Task Equivocality. Information
Systems Research, 9(3):256–274, September 1998. Publisher: INFORMS.
[114] Darleen M. DeRosa, Donald A. Hantula, Ned Kock, and John D’Arcy.
Trust and leadership in virtual teamwork: A media naturalness per-
spective. Human Resource Management, 43(2-3):219–232, 2004. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1002/hrm.20016.
[115] Susannah Kate Devitt, Jason Scholz, Timo Schless, and Larry Lewis. Develop-
ing a Trusted Human-AI Network for Humanitarian Benefit. arXiv:2112.11191
[cs], December 2021. arXiv: 2112.11191.
[116] Thomas G Dietterich and Eric J Horvitz. Rise of concerns about ai: reflections
and directions. Communications of the ACM, 58(10):38–40, 2015.
[117] Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. Algorithm aver-
sion: People erroneously avoid algorithms after seeing them err. Journal of
Experimental Psychology: General, 144(1):114–126, 2015. Place: US Publisher:
American Psychological Association.
[118] Daniel Dinello. Technophobia!: Science Fiction Visions of Posthuman Technol-
ogy. University of Texas Press, December 2006.
[119] Matias Dodel and Gustavo S. Mesch. Perceptions about the im-
pact of automation in the workplace. Information, Communication
& Society, 23(5):665–680, April 2020. Publisher: Routledge eprint:
https://doi.org/10.1080/1369118X.2020.1716043.
[120] Judith T. Dodenhoff. Interpersonal attraction and direct–indirect supervisor
influence as predictors of counselor trainee effectiveness. Journal of Counseling
Psychology, 28(1):47–52, 1981. Place: US Publisher: American Psychological
Association.
[121] Jenna Drenten and Gillian Brooks. Celebrity 2.0: Lil Miquela and the rise of a
virtual star system. Feminist Media Studies, 20(8):1319–1323, November 2020.
Publisher: Routledge eprint: https://doi.org/10.1080/14680777.2020.1830927.
[122] Tripp Driskell, James E Driskell, C Shawn Burke, and Eduardo Salas. Team
roles: A review and integration. Small Group Research, 48(4):482–511, 2017.
[123] Alpana Dubey, Kumar Abhinav, Sakshi Jain, Veenu Arora, and Asha Puttaveer-
ana. Haco: a framework for developing human-ai teaming. In Proceedings of
the 13th Innovations in Software Engineering Conference on Formerly known
as India Software Engineering Conference, pages 1–9, 2020.
[124] Phillip J. Durst and Wendell Gray. Levels of Autonomy and Autonomous
System Performance Assessment for Intelligent Unmanned Systems. Technical
report, ENGINEER RESEARCH AND DEVELOPMENT CENTER VICKS-
BURG MS GEOTECHNICAL AND STRUCTURES LAB, April 2014. Section:
Technical Reports.
[125] Amitava Dutta. Integrating AI and optimization for decision support: a survey.
Decision Support Systems, 18(3):217–226, November 1996.
[126] Amin Ebrahimzadeh, Mahfuzulhoq Chowdhury, and Martin Maier. Human-
Agent-Robot Task Coordination in FiWi-Based Tactile Internet Infrastructures
Using Context- and Self-Awareness. IEEE Transactions on Network and Ser-
vice Management, 16(3):1127–1142, September 2019. Conference Name: IEEE
Transactions on Network and Service Management.
[127] Ahmed Elgammal. AI Is Blurring the Definition of Artist: Advanced algorithms
are using machine learning to create art autonomously. American Scientist,
107(1):18–22, January 2019. Publisher: Sigma Xi, The Scientific Research So-
ciety.
[128] Holly Else. Can a major AI conference shed its reputation for hosting sexist
behaviour? Nature, 563(7731):610–612, November 2018. Publisher: Nature
Publishing Group.
[129] Joelle Emerson. Don’t give up on unconscious bias training—make it better.
Harvard Business Review, 28(4), 2017.
[130] Nadja Enke and Nils S. Borchers. Social Media Influencers in Strate-
gic Communication: A Conceptual Framework for Strategic Social Me-
dia Influencer Communication. International Journal of Strategic Com-
munication, 13(4):261–277, August 2019. Publisher: Routledge eprint:
https://doi.org/10.1080/1553118X.2019.1620234.
[131] Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller,
and Dylan D. Schmorrow. Trust Engineering for Human-AI Teams. Proceedings
of the Human Factors and Ergonomics Society Annual Meeting, 63(1):322–326,
November 2019. Publisher: SAGE Publications Inc.
[132] Seda Fabian. Artificial Intelligence and the Law: Will Judges Run on Punch
Cards. Common Law Review, 16:4–6, 2020.
[133] Emily Falk and Christin Scholz. Persuasion, Influence, and Value: Perspectives
from Communication and Social Neuroscience. Annual Review of Psychology,
69(1):329–356, 2018. eprint: https://doi.org/10.1146/annurev-psych-122216-
011821.
[134] Paulo Fernandes, Afonso Sales, Alan R. Santos, and Thais Webber. Perfor-
mance Evaluation of Software Development Teams: a Practical Case Study.
Electronic Notes in Theoretical Computer Science, 275:73–92, September 2011.
[135] Enrique Fernández-Macías, Emilia Gómez, José Hernández-Orallo, Bao Sheng
Loe, Bertin Martens, Fernando Martínez-Plumed, and Songül Tolan. A multi-
disciplinary task-based perspective for evaluating the impact of AI autonomy
and generality on the future of work. arXiv:1807.02416 [cs], July 2018. arXiv:
1807.02416.
[136] Emilio Ferrara. Contagion dynamics of extremist propaganda in social networks.
Information Sciences, 418-419:1–12, December 2017.
[137] Nicolas E. Díaz Ferreyra, Esma Aïmeur, Hicham Hage, Maritta Heisel, and
Catherine García van Hoogstraten. Persuasion Meets AI: Ethical Considera-
tions for the Design of Social Engineering Countermeasures. Proceedings of the
12th International Joint Conference on Knowledge Discovery, Knowledge Engi-
neering and Knowledge Management, pages 204–211, 2020. arXiv: 2009.12853.
[138] Ibrahim Filiz, Jan René Judek, Marco Lorenz, and Markus Spiwoks. Reducing
algorithm aversion through experience. Journal of Behavioral and Experimental
Finance, 31:100524, September 2021.
[139] Christopher Flathmann, Nathan McNeese, and Lorenzo Barberis Canonico.
Using Human-Agent Teams to Purposefully Design Multi-Agent Systems.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
63(1):1425–1429, November 2019. Publisher: SAGE Publications Inc.
[140] Christopher Flathmann, Nathan McNeese, Beau Schelble, Bart Knijnenburg,
and Guo Freeman. Boldness and shyness: Exploring agent interactions to
facilitate effective human-agent teaming. Transactions on Computer-Human
Interaction, Under Review.
[141] Christopher Flathmann, Beau Schelble, Brock Tubre, Nathan McNeese, and
Paige Rodeghero. Invoking Principles of Groupware to Develop and Evaluate
Present and Future Human-Agent Teams. In Proceedings of the 8th Interna-
tional Conference on Human-Agent Interaction, HAI ’20, pages 15–24, New
York, NY, USA, November 2020. Association for Computing Machinery.
[142] Christopher Flathmann, Beau G. Schelble, and Nathan J. McNeese. Foster-
ing Human-Agent Team Leadership by Leveraging Human Teaming Princi-
ples. In 2021 IEEE 2nd International Conference on Human-Machine Systems
(ICHMS), pages 1–6, September 2021.
[143] Christopher Flathmann, Beau G. Schelble, Rui Zhang, and Nathan J. McNeese.
Modeling and Guiding the Creation of Ethical Human-AI Teams. In Proceedings
of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 469–479.
Association for Computing Machinery, New York, NY, USA, July 2021.
[144] Joseph P. Forgas and Kipling D. Williams. Social Influence: Direct and Indirect
Processes. Psychology Press, New York, September 2016.
[145] Jane E. Fountain. The moon, the ghetto and artificial intelligence: Reducing
systemic racism in computational algorithms. Government Information Quar-
terly, page 101645, October 2021.
[146] Ana Freire, Lorenzo Porcaro, and Emilia Gómez. Measuring Diversity of Arti-
ficial Intelligence Conferences. In Proceedings of 2nd Workshop on Diversity in
Artificial Intelligence (AIDBEI), pages 39–50. PMLR, September 2021. ISSN:
2640-3498.
[147] Noah E. Friedkin. A structural theory of social influence. Cambridge University
Press, New York, NY, US, 1998. Pages: xix, 231.
[148] Noah E. Friedkin and Eugene C. Johnsen. Social influence and opinions. The
Journal of Mathematical Sociology, 15(3-4):193–206, January 1990. Publisher:
Routledge eprint: https://doi.org/10.1080/0022250X.1990.9990069.
[149] Mark A. Fuller, Andrew M. Hardin, and Robert M. Davison. Efficacy in
Technology-Mediated Distributed Teams. Journal of Management Informa-
tion Systems, 23(3):209–235, December 2006. Publisher: Routledge eprint:
https://doi.org/10.2753/MIS0742-1222230308.
[150] Sheng Gao, Jiazheng Wu, and Jianliang Ai. Multi-UAV reconnaissance task
allocation for heterogeneous targets using grouping ant colony optimization
algorithm. Soft Computing, 25(10):7155–7167, May 2021.
[151] Shuqing Gao, Lingnan He, Yue Chen, Dan Li, and Kaisheng Lai. Public Per-
ception of Artificial Intelligence in Medical Care: Content Analysis of Social
Media. Journal of Medical Internet Research, 22(7):e16649, July 2020. Com-
pany: Journal of Medical Internet Research Distributor: Journal of Medical In-
ternet Research Institution: Journal of Medical Internet Research Label: Jour-
nal of Medical Internet Research Publisher: JMIR Publications Inc., Toronto,
Canada.
[152] Colin Garvey and Chandler Maskal. Sentiment Analysis of the News Media
on Artificial Intelligence Does Not Support Claims of Negative Bias Against
Artificial Intelligence. OMICS: A Journal of Integrative Biology, 24(5):286–
299, May 2020. Publisher: Mary Ann Liebert, Inc., publishers.
[153] Timnit Gebru. Oxford Handbook on AI Ethics Book Chapter on Race and
Gender. arXiv:1908.06165 [cs], August 2019. arXiv: 1908.06165.
[154] Mahtab Ghazizadeh, John D. Lee, and Linda Ng Boyle. Extending the Technol-
ogy Acceptance Model to assess automation. Cognition, Technology & Work,
14(1):39–49, March 2012.
[155] David Gilbert, Liz Lee-Kelley, and Maya Barton. Technophobia, gender influ-
ences and consumer decision-making for technology-related products. European
Journal of Innovation Management, 6(4):253–263, January 2003. Publisher:
MCB UP Ltd.
[156] Alyssa Glass, Deborah L. McGuinness, and Michael Wolverton. Toward es-
tablishing trust in adaptive agents. In Proceedings of the 13th international
conference on Intelligent user interfaces, IUI ’08, pages 227–236, New York,
NY, USA, January 2008. Association for Computing Machinery.
[157] Theresa M. Glomb and Hui Liao. Interpersonal Aggression in Work Groups:
Social Influence, Reciprocal, and Individual Effects. Academy of Management
Journal, 46(4):486–496, August 2003. Publisher: Academy of Management.
[158] Morgan Glucksman. The rise of social media influencer marketing on lifestyle
branding: A case study of Lucie Fink. Elon Journal of undergraduate research
in communications, 8(2):77–87, 2017.
[159] Hanyoung Go, Myunghwa Kang, and SeungBeum Chris Suh. Machine learning
of robots in tourism and hospitality: interactive technology acceptance model
(iTAM) cutting edge. Tourism Review, 75(4):625–636, January 2020. Pub-
lisher: Emerald Publishing Limited.
[160] Jamie C Gorman, Nancy J Cooke, and Polemnia G Amazeen. Training adaptive
teams. Human Factors, 52(2):295–307, 2010.
[161] Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans.
Viewpoint: When Will AI Exceed Human Performance? Evidence from AI
Experts. Journal of Artificial Intelligence Research, 62:729–754, July 2018.
[162] Jonathan Gratch, Gale Lucas, and Aisha King. It’s Only a Computer: The
Impact of Human-agent Interaction in Clinical Interviews. page 8.
[163] Dr Paul Griffiths and Dr Mitt Nowshade Kabir. ECIAIR 2019 European Confer-
ence on the Impact of Artificial Intelligence and Robotics. Academic Conferences
and publishing limited, October 2019. Google-Books-ID: 8MXBDwAAQBAJ.
[164] Deborah H Gruenfeld, Paul V. Martorana, and Elliott T. Fan. What Do
Groups Learn from Their Worldliest Members? Direct and Indirect Influence
in Dynamic Teams. Organizational Behavior and Human Decision Processes,
82(1):45–59, May 2000.
[165] Stanley M Gully, Kara A Incalcaterra, Aparna Joshi, and J Matthew Beaubien.
A meta-analysis of team-efficacy, potency, and performance: interdependence
and level of analysis as moderators of observed relationships. Journal of applied
psychology, 87(5):819, 2002.
[166] Manjul Gupta, Carlos M. Parra, and Denis Dennehy. Questioning Racial and
Gender Bias in AI-based Recommendations: Do Espoused National Cultural
Values Matter? Information Systems Frontiers, June 2021.
[167] Dogan Gursoy, Oscar Hengxuan Chi, Lu Lu, and Robin Nunkoo. Consumers
acceptance of artificially intelligent (AI) device use in service delivery. Interna-
tional Journal of Information Management, 49:157–169, December 2019.
[168] Lawrence Hadley, Marc Poitras, John Ruggiero, and Scott
Knowles. Performance evaluation of National Football League
teams. Managerial and Decision Economics, 21(2):63–70, 2000.
eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/1099-
1468%28200003%2921%3A2%3C63%3A%3AAID-MDE964%3E3.0.CO%3B2-
O.
[169] Philipp Haindl, Georg Buchgeher, Maqbool Khan, and Bernhard Moser. To-
wards a Reference Software Architecture for Human-AI Teaming in Smart Man-
ufacturing. arXiv:2201.04876 [cs], January 2022. arXiv: 2201.04876.
[170] Justin Halberda, Michèle MM Mazzocco, and Lisa Feigenson. Individual differ-
ences in non-verbal number acuity correlate with maths achievement. Nature,
455(7213):665–668, 2008.
[171] Peter F Halpin and Alina A von Davier. Modeling collaboration using point
processes. In Innovative assessment of collaboration, pages 233–247. Springer,
2017.
[172] Sandra G. Hart and Lowell E. Staveland. Development of NASA-TLX (Task
Load Index): Results of Empirical and Theoretical Research. In Peter A. Han-
cock and Najmedin Meshkati, editors, Advances in Psychology, volume 52 of
Human Mental Workload, pages 139–183. North-Holland, January 1988.
[173] Daniel A. Hashimoto, Guy Rosman, Daniela Rus, and Ozanan R. Meireles. Arti-
ficial Intelligence in Surgery: Promises and Perils. Annals of surgery, 268(1):70–
76, July 2018.
[174] Daniel A. Hashimoto, Thomas M. Ward, and Ozanan R. Meireles. The Role of
Artificial Intelligence in Surgery. Advances in Surgery, 54:89–101, September
2020. Publisher: Elsevier.
[175] Hongwei He, Yehuda Baruch, and Chieh-Peng Lin. Modeling team knowledge
sharing and team flexibility: The role of within-team competition. Human
Relations, 67(8):947–978, August 2014. Publisher: SAGE Publications Ltd.
[176] D. Benjamin Hellar and Michael McNeese. NeoCITIES: A Simulated Command
and Control Task Environment for Experimental Research. Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, 54(13):1027–1031,
September 2010. Publisher: SAGE Publications Inc.
[177] Charlie Hewitt, Ioannis Politis, Theocharis Amanatidis, and Advait Sarkar.
Assessing public perception of self-driving cars: the autonomous vehicle accep-
tance model. In Proceedings of the 24th International Conference on Intelligent
User Interfaces, IUI ’19, pages 518–527, New York, NY, USA, March 2019.
Association for Computing Machinery.
[178] Ernest R. Hilgard. Hypnotic susceptibility. Harcourt, Brace & World, Oxford,
England, 1965. Pages: xiii, 434.
[179] Ernest R. Hilgard, André M. Weitzenhoffer, Judah Landes, and Rosemarie K.
Moore. The distribution of susceptibility to hypnosis in a student population:
A study using the Stanford Hypnotic Susceptibility Scale. Psychological Mono-
graphs: General and Applied, 75(8):1–22, 1961. Place: US Publisher: American
Psychological Association.
[180] Geoffrey Ho, Liana Maria Kiff, Tom Plocher, and Karen Zita Haigh. A Model
of Trust and Reliance of Automation Technology for Older Users. page 6.
[181] Rashina Hoda, James Noble, and Stuart Marshall. Self-Organizing Roles on Ag-
ile Software Development Teams. IEEE Transactions on Software Engineering,
39(3):422–444, March 2013. Conference Name: IEEE Transactions on Software
Engineering.
[182] Michael A. Hogg. Influence and leadership. In Handbook of social psychology,
Vol. 2, 5th ed, pages 1166–1207. John Wiley & Sons, Inc., Hoboken, NJ, US,
2010.
[183] Eric Holder, Lixiao Huang, Erin Chiou, Myounghoon Jeon, and Joseph B.
Lyons. Designing for Bi-Directional Transparency in Human-AI-Robot-
Teaming. Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, 65(1):57–61, September 2021. Publisher: SAGE Publications Inc.
[184] Joo-Wha Hong. With great power comes great responsibility: inquiry into
the social roles and the power dynamics in human-ai interactions. Journal of
Control and Decision, pages 1–8, 2021.
[185] Joo-Wha Hong, Sukyoung Choi, and Dmitri Williams. Sexist AI: An Experi-
ment Integrating CASA and ELM. International Journal of Human–Computer
Interaction, 36(20):1928–1941, December 2020. Publisher: Taylor & Francis
eprint: https://doi.org/10.1080/10447318.2020.1801226.
[186] Se-Joon Hong, Carrie Siu Man Lui, Jungpil Hahn, Jae Yun Moon, and Tai Gyu
Kim. How old are you really? Cognitive age in technology acceptance. Decision
Support Systems, 56:122–130, December 2013.
[187] Weiyin Hong, James Y.L. Thong, Wai-Man Wong, and Kar-Yan Tam. Deter-
minants of User Acceptance of Digital Libraries: An Empirical Examination of
Individual Differences and System Characteristics. Journal of Management In-
formation Systems, 18(3):97–124, January 2002. Publisher: Routledge eprint:
https://doi.org/10.1080/07421222.2002.11045692.
[188] Ahmed Hosny, Chintan Parmar, John Quackenbush, Lawrence H. Schwartz,
and Hugo J. W. L. Aerts. Artificial intelligence in radiology. Nature Reviews
Cancer, 18(8):500–510, August 2018. Number: 8 Publisher: Nature Publishing
Group.
[189] Paul Jen-Hwa Hu, Theodore H. K. Clark, and Will W. Ma. Examining tech-
nology acceptance by school teachers: a longitudinal study. Information &
Management, 41(2):227–241, December 2003.
[190] Christopher Hundhausen and Sarah Douglas. Shifting from “high fidelity” to
“low fidelity” algorithm visualization technology. In CHI ’00 Extended Abstracts
on Human Factors in Computing Systems, CHI EA ’00, pages 179–180, New
York, NY, USA, April 2000. Association for Computing Machinery.
[191] Sabina Hunziker, Anna C. Johansson, Franziska Tschan, Norbert K. Semmer,
Laura Rock, Michael D. Howell, and Stephan Marsch. Teamwork and Lead-
ership in Cardiopulmonary Resuscitation. Journal of the American College of
Cardiology, 57(24):2381–2388, June 2011. Publisher: American College of Car-
diology Foundation.
[192] Dietmar Hübner. Two Kinds of Discrimination in AI-Based Penal Decision-
Making. ACM SIGKDD Explorations Newsletter, 23(1):4–13, May 2021.
[193] Robert G Isaac, Wilfred J Zerbe, and Douglas C Pitt. Leadership and moti-
vation: The effective application of expectancy theory. Journal of managerial
issues, pages 212–226, 2001. Publisher: JSTOR.
[194] Rozemarijn Janss, Sonja Rispens, Mien Segers, and Karen A Jehn. What
is happening under the surface? Power, conflict and the performance
of medical teams. Medical Education, 46(9):838–849, 2012. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-2923.2012.04322.x.
[195] Mohammad Hossein Jarrahi. Artificial intelligence and the future of work:
Human-AI symbiosis in organizational decision making. Business Horizons,
61(4):577–586, July 2018.
[196] Guillermina Jasso. Factorial Survey Methods for Studying Beliefs and Judg-
ments. Sociological Methods & Research, 34(3):334–423, February 2006. Pub-
lisher: SAGE Publications Inc.
[197] Johanna Jauernig, Matthias Uhl, and Gari Walkowitz. People Prefer Moral Dis-
cretion to Algorithms: Algorithm Aversion Beyond Intransparency. Philosophy
& Technology, 35(1):2, January 2022.
[198] Alexander Jensen. Towards Verifying a Blocks World for Teams GOAL Agent.
In Proceedings of the 13th International Conference on Agents and Artificial
Intelligence, pages 337–344, Online Streaming, 2021. SCITEPRESS - Science
and Technology Publications.
[199] Shu Jiang and Ronald C. Arkin. Mixed-Initiative Human-Robot Interaction:
Definition, Taxonomy, and Survey. In 2015 IEEE International Conference on
Systems, Man, and Cybernetics, pages 954–961, October 2015.
[200] Xin Jiang. How to motivate people working in teams. International Journal of
business and Management, 5(10):223, 2010. Publisher: Citeseer.
[201] S. Venus Jin, Aziz Muqaddam, and Ehri Ryu. Instafamous and social media in-
fluencer marketing. Marketing Intelligence & Planning, 37(5):567–579, January
2019. Publisher: Emerald Publishing Limited.
[202] Alan R. Johnson, Rens van de Schoot, Frédéric Delmar, and William D. Crano.
Social Influence Interpretation of Interpersonal Processes and Team Perfor-
mance Over Time Using Bayesian Model Selection. Journal of Management,
41(2):574–606, February 2015. Publisher: SAGE Publications Inc.
[203] Matthew Johnson, Catholijn Jonker, Birna van Riemsdijk, Paul J. Feltovich,
and Jeffrey M. Bradshaw. Joint Activity Testbed: Blocks World for Teams
(BW4T). In Huib Aldewereld, Virginia Dignum, and Gauthier Picard, edi-
tors, Engineering Societies in the Agents World X, Lecture Notes in Computer
Science, pages 254–256, Berlin, Heidelberg, 2009. Springer.
[204] Patsy E. Johnson and Susan J. Scollay. School-based, decision-making councils
Conflict, leader power and social influence in the vertical team. Journal of
Educational Administration, 39(1):47–66, January 2001. Publisher: MCB UP
Ltd.
[205] Karen S. Johnson-Cartee and Gary A. Copeland. Strategic Political Communi-
cation: Rethinking Social Influence, Persuasion, and Propaganda. Rowman &
Littlefield Publishers, October 2003. Google-Books-ID: FVx7AAAAQBAJ.
[206] Patrik Jonell, Anna Deichler, Ilaria Torre, Iolanda Leite, and Jonas Beskow.
Mechanical Chameleons: Evaluating the effects of a social robot’s non-verbal
behavior on social influence. arXiv:2109.01206 [cs], September 2021. arXiv:
2109.01206.
[207] Ricardo Jota, Pedro Lopes, and Joaquim Jorge. I, the device: observing human
aversion from an HCI perspective. In CHI ’12 Extended Abstracts on Human
Factors in Computing Systems, CHI EA ’12, pages 261–270, New York, NY,
USA, May 2012. Association for Computing Machinery.
[208] James W Julian, Doyle W Bishop, and Fred E Fiedler. Quasitherapeutic ef-
fects of intergroup competition. Journal of Personality and Social Psychology,
3(3):321, 1966.
[209] Thomas Jønsson and Hans Jeppe Jeppesen. Under the influence of the team?
An investigation of the relationships between team autonomy, individual au-
tonomy and social influence within teams. The International Journal of Hu-
man Resource Management, 24(1):78–93, January 2013. Publisher: Routledge
eprint: https://doi.org/10.1080/09585192.2012.672448.
[210] Brian Kalis, Matt Collier, and Richard Fu. 10 Promising AI Applications in
Health Care. page 5, 2018.
[211] Martin F. Kaplan and Charles E. Miller. Group decision making and normative
versus informational influence: Effects of type of issue and assigned decision
rule. Journal of Personality and Social Psychology, 53(2):306–313, 1987. Place:
US Publisher: American Psychological Association.
[212] Nancy Katz. Sports teams as a model for workplace teams: Lessons and liabil-
ities. Academy of Management Perspectives, 15(3):56–67, August 2001. Pub-
lisher: Academy of Management.
[213] Yarden Katz. Manufacturing an Artificial Intelligence Revolution. SSRN
Scholarly Paper ID 3078224, Social Science Research Network, Rochester, NY,
November 2017.
[214] Patrick Kenny, Albert A Rizzo, Thomas D Parsons, Jonathan Gratch, and
William Swartout. A virtual human agent for training novice therapists clinical
interviewing skills. Annual Review of CyberTherapy and Telemedicine, 5:77–83,
2007. Publisher: Interactive Media Institute.
[215] Otto F. Kernberg. Ideology, conflict, and leadership in groups and organizations.
Yale University Press, New Haven, CT, US, 1998. Pages: xii, 321.
[216] Arash Keshavarzi Arshadi, Julia Webb, Milad Salem, Emmanuel Cruz, Sta-
cie Calad-Thomson, Niloofar Ghadirian, Jennifer Collins, Elena Diez-Cecilia,
Brendan Kelly, Hani Goodarzi, and Jiann Shiun Yuan. Artificial Intelligence
for COVID-19 Drug Discovery and Vaccine Development. Frontiers in Artificial
Intelligence, 3, 2020.
[217] Odai Khasawneh. The Conceptual Gap Between Technophobia and Computer
Anxiety and Its Empirical Consequences. March 2018.
[218] Odai Y. Khasawneh. Technophobia without boarders: The influence of techno-
phobia and emotional intelligence on technology acceptance and the moderating
influence of organizational climate. Computers in Human Behavior, 88:210–218,
November 2018.
[219] Sarah Khatry. Facebook and Pandora’s box: How using Big Data and Artificial
Intelligence in advertising resulted in housing discrimination. Applied Marketing
Analytics, 6(1):37–45, January 2020.
[220] Diksha Khurana, Aditya Koli, Kiran Khatter, and Sukhdev Singh. Natural
language processing: State of the art, current trends and challenges. Multimedia
Tools and Applications, pages 1–32, 2022.
[221] John F. Kihlstrom. Hypnosis. Annual Review of Psychology, 36(1):385–418,
1985.
eprint: https://doi.org/10.1146/annurev.ps.36.020185.002125.
[222] Bae Sung Kim and Hyung Jin Woo. A Study on the Intention to Use AI
Speakers: focusing on extended technology acceptance model. The Journal
of the Korea Contents Association, 19(9):1–10, 2019. Publisher: The Korea
Contents Association.
[223] Chan Young Kim, Jae Kyu Lee, Yoon Ho Cho, and Deok Hwan Kim. VISCORS:
a visual-content recommender for the mobile Web. IEEE Intelligent Systems,
19(6):32–39, November 2004. Conference Name: IEEE Intelligent Systems.
[224] Woo-Hyun Kim and Jong-Hwan Kim. Individualized AI Tutor Based on Devel-
opmental Learning Networks. IEEE Access, 8:27927–27937, 2020. Conference
Name: IEEE Access.
[225] D. Lawrence Kincaid. From Innovation to Social Norm: Bounded Normative
Influence. Journal of Health Communication, 9(sup1):37–57, January 2004.
Publisher: Taylor & Francis eprint: https://doi.org/10.1080/10810730490271511.
[226] Hiroaki Kitano, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, and Eiichi Os-
awa. RoboCup: The Robot World Cup Initiative. In Proceedings of the first
international conference on Autonomous agents, AGENTS ’97, pages 340–347,
New York, NY, USA, February 1997. Association for Computing Machinery.
[227] Hiroaki Kitano, Minoru Asada, Yasuo Kuniyoshi, Itsuki Noda, Eiichi Osawa,
and Hitoshi Matsubara. RoboCup: A Challenge Problem for AI. AI Magazine,
18(1):73–73, March 1997. Number: 1.
[228] Hiroaki Kitano, Milind Tambe, Peter Stone, Manuela Veloso, Silvia Corade-
schi, Eiichi Osawa, Hitoshi Matsubara, Itsuki Noda, and Minoru Asada. The
RoboCup synthetic agent challenge 97. In Hiroaki Kitano, editor, RoboCup-97:
Robot Soccer World Cup I, Lecture Notes in Computer Science, pages 62–73,
Berlin, Heidelberg, 1998. Springer.
[229] A. Kjoelen, M.J. Thompson, S.E. Umbaugh, R.H. Moss, and W.V. Stoecker.
Performance of AI methods in detecting melanoma. IEEE Engineering in
Medicine and Biology Magazine, 14(4):411–416, July 1995. Conference Name:
IEEE Engineering in Medicine and Biology Magazine.
[230] Richard N. Knowles. Self-Organizing Leadership: A Way of Seeing What
Is Happening in Organizations and a Pathway to Coherence. Emer-
gence, 3(4):112–127, December 2001. Publisher: Routledge eprint:
https://doi.org/10.1207/S15327000EM0304_8.
[231] Haavard Koppang. Social Influence by Manipulation: A Definition and Case
of Propaganda. Middle East Critique, 18(2):117–143, January 2009. Publisher:
Routledge eprint: https://doi.org/10.1080/19436140902989472.
[232] James R. Korndorffer Jr., Mary T. Hawn, David A. Spain, Lisa M. Knowlton,
Dan E. Azagury, Aussama K. Nassar, James N. Lau, Katherine D. Arnow,
Amber W. Trickey, and Carla M. Pugh. Situating Artificial Intelligence in
Surgery: A Focus on Disease Severity. Annals of Surgery, 272(3):523–528,
September 2020.
[233] Joseph Kramer, Sunil Noronha, and John Vergo. A user-centered design ap-
proach to personalization. Communications of the ACM, 43(8):44–48, 2000.
Publisher: ACM New York, NY, USA.
[234] Martin Krzywdzinski. Automation, Digitalization, and Changes in Occupa-
tional Structures in the Automobile Industry in Germany, the United States,
and Japan: A Brief History from the Early 1990s Until 2018, volume 10 of
Weizenbaum Series. Weizenbaum Institute for the Networked Society - The
German Internet Institute, Berlin, 2020.
[235] Logan Kugler. AI judges and juries. Communications of the ACM, 61(12):19–
21, November 2018.
[236] Philipp Kulms and Stefan Kopp. More Human-Likeness, More Trust? The Ef-
fect of Anthropomorphism on Self-Reported and Behavioral Trust in Continued
and Interdependent Human-Agent Cooperation. In Proceedings of Mensch und
Computer 2019, MuC’19, pages 31–42, New York, NY, USA, September 2019.
Association for Computing Machinery.
[237] Ram L Kumar, Michael Alan Smith, and Snehamay Bannerjee. User interface
features influencing overall ease of use and personalization. Information &
Management, 41(3):289–302, 2004. Publisher: Elsevier.
[238] Yi Lai, Atreyi Kankanhalli, and Desmond Ong. Human-AI Collaboration in
Healthcare: A Review and Research Agenda. Hawaii International Conference
on System Sciences 2021 (HICSS-54), January 2021.
[239] Stephen A. Latour and Ajay K. Manrai. Interactive Impact of Informational and
Normative Influence on Donations. Journal of Marketing Research, 26(3):327–
335, August 1989. Publisher: SAGE Publications Inc.
[240] Tricia M. Leahey, Rajiv Kumar, Brad M. Weinberg, and Rena R. Wing.
Teammates and Social Influence Affect Weight Loss Outcomes in a Team-
Based Weight Loss Competition. Obesity, 20(7):1413–1418, 2012. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1038/oby.2012.18.
[241] Chaiwoo Lee, Bobbie Seppelt, Bryan Reimer, Bruce Mehler, and Joseph F.
Coughlin. Acceptance of Vehicle Automation: Effects of Demographic Traits,
Technology Experience and Media Exposure. Proceedings of the Human Fac-
tors and Ergonomics Society Annual Meeting, 63(1):2066–2070, November 2019.
Publisher: SAGE Publications Inc.
[242] Se-Lee Lee. The Meanings of Fashion on the Social Media of Virtual Influencer
Lil Miquela. Journal of Digital Convergence, 19(9):323–333, 2021. Publisher:
The Society of Digital Policy and Management.
[243] Younghwa Lee, Kenneth A. Kozar, and Kai R.T. Larsen. The Technology Ac-
ceptance Model: Past, Present, and Future. Communications of the Association
for Information Systems, 12, 2003.
[244] M. Leonard, S. Graham, and D. Bonacum. The human factor: the critical
importance of effective teamwork and communication in providing safe care.
BMJ Quality & Safety, 13(suppl 1):i85–i90, October 2004. Publisher: BMJ
Publishing Group Ltd Section: Original Article.
[245] Wassily Leontief and Faye Duchin. The Impacts of Automation on Employ-
ment, 1963-2000. Final Report. Institute for Economic Analysis, New York
University, New York, NY, April 1984.
[246] Jeffrey A LePine. Team adaptation and postchange performance: effects of
team composition in terms of members’ cognitive ability and personality. Jour-
nal of applied psychology, 88(1):27, 2003. Publisher: American Psychological
Association.
[247] Jeffrey A. LePine, Mary Ann Hanson, Walter C. Borman, and Stephan J. Mo-
towidlo. Contextual performance and teamwork: Implications for staffing. In
Research in Personnel and Human Resources Management, volume 19 of Re-
search in Personnel and Human Resources Management, pages 53–90. Emerald
Group Publishing Limited, January 2000.
[248] Amanda Levendowski. How Copyright Law Can Fix Artificial Intelligence’s
Implicit Bias Problem. Washington Law Review, 93(2):579–630, 2018.
[249] Tianyi Li, Mihaela Vorvoreanu, Derek DeBellis, and Saleema Amershi. As-
sessing human-ai interaction early through factorial surveys: A study on the
guidelines for human-ai interaction. ACM Transactions on Computer-Human
Interaction, 2022.
[250] Claire Liang, Julia Proft, Erik Andersen, and Ross A. Knepper. Implicit Com-
munication of Actionable Information in Human-AI teams. In Proceedings of the
2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, pages
1–13, New York, NY, USA, May 2019. Association for Computing Machinery.
[251] Han-Wei Liu, Ching-Fu Lin, and Yu-Jie Chen. Beyond State v Loomis: artifi-
cial intelligence, government algorithmization and accountability. International
Journal of Law and Information Technology, 27(2):122–141, June 2019.
[252] Ji Liu and Yunpeng Zhao. Role-oriented Task Allocation in Human-Machine
Collaboration System. In 2021 IEEE 4th International Conference on Infor-
mation Systems and Computer Aided Education (ICISCAE), pages 243–248,
September 2021.
[253] Liping Liu and Qingxiong Ma. Perceived system performance: a test of an ex-
tended technology acceptance model. ACM SIGMIS Database: the DATABASE
for Advances in Information Systems, 37(2-3):51–59, 2006.
[254] Peng Liu, Run Yang, and Zhigang Xu. How Safe Is Safe Enough for
Self-Driving Vehicles? Risk Analysis, 39(2):315–325, 2019. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/risa.13116.
[255] Peng Liu, Yawen Zhang, and Zhen He. The effect of population age on the ac-
ceptable safety of self-driving vehicles. Reliability Engineering & System Safety,
185:341–347, May 2019.
[256] Xiangmin Liu and Rosemary Batt. How Supervisors Influence Performance:
A Multilevel Study of Coaching and Group Management in Technology-
Mediated Services. Personnel Psychology, 63(2):265–298, 2010. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1744-6570.2010.01170.x.
[257] Liudmila Bredikhina. Designing identity in VTuber era. In Virtual Reality
International Conference Proceedings, 2020.
[258] G.S. Loo, B.C.P. Tang, and L. Janczewski. An adaptable human-agent collab-
oration information system in manufacturing (HACISM). In Proceedings 11th
International Workshop on Database and Expert Systems Applications, pages
445–449, September 2000. ISSN: 1529-4188.
[259] Jeremy Lopez, Claire Textor, Beau Schelble, Rui Zhang, Richard Pak, Nathan J.
McNeese, and Guo Freeman. Examining the Relationship Between an Au-
tonomous Teammate’s Ethical Decision Making and Trust. In Technology,
Mind, and Behavior, November 2021.
[260] Kenneth R. Lord, Myung-Soo Lee, and Peggy Choong. Differences in Normative
and Informational Social Influence. ACR North American Advances, NA-28,
2001.
[261] D. Lorenčík, M. Tarhaničová, and P. Sinčák. Influence of Sci-Fi films on artificial
intelligence and vice-versa. In 2013 IEEE 11th International Symposium on
Applied Machine Intelligence and Informatics (SAMI), pages 27–31, January
2013.
[262] Dominic Loske and Matthias Klumpp. Intelligent and efficient? an empirical
analysis of human–ai collaboration for truck drivers in retail logistics. The
International Journal of Logistics Management, 2021.
[263] Ryan Louie, Andy Coenen, Cheng Zhi Huang, Michael Terry, and Carrie J.
Cai. Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative
Models. In Proceedings of the 2020 CHI Conference on Human Factors in
Computing Systems, pages 1–13. Association for Computing Machinery, New
York, NY, USA, April 2020.
[264] June Lu, Chun-Sheng Yu, Chang Liu, and James E. Yao. Technology acceptance
model for wireless Internet. Internet Research, 13(3):206–222, January 2003.
Publisher: MCB UP Ltd.
[265] Zhicong Lu, Chenxinran Shen, Jiannan Li, Hong Shen, and Daniel Wigdor.
More Kawaii than a Real-Person Live Streamer: Understanding How the Otaku
Community Engages with and Perceives Virtual YouTubers. In Proceedings
of the 2021 CHI Conference on Human Factors in Computing Systems, CHI
’21, pages 1–14, New York, NY, USA, May 2021. Association for Computing
Machinery.
[266] Joseph B Lyons, Nhut T Ho, William E Fergueson, Garrett G Sadler, Saman-
tha D Cals, Casey E Richardson, and Mark A Wilkins. Trust of an automatic
ground collision avoidance technology: A fighter pilot perspective. Military
Psychology, 28(4):271–277, 2016.
[267] Joseph B Lyons, Nhut T Ho, Kolina S Koltai, Gina Masequesmay, Mark Skoog,
Artemio Cacanindin, and Walter W Johnson. Trust-based analysis of an air
force collision avoidance system. Ergonomics in design, 24(1):9–12, 2016.
[268] Ramon Lopez de Mantaras and Josep Lluis Arcos. AI and Music: From Compo-
sition to Expressive Performance. AI Magazine, 23(3):43–43, September 2002.
Number: 3.
[269] James Manyika, Michael Chui, Mehdi Miremadi, Jacques Bughin, Katy George,
Paul Willmott, and Martin Dewhurst. A future that works: AI, automation,
employment, and productivity. McKinsey Global Institute Research, Tech. Rep,
60:1–135, 2017.
[270] V. J. Mar and H. P. Soyer. Artificial intelligence for melanoma diagnosis: how
can we deliver on the promise? Annals of Oncology, 29(8):1625–1628, August
2018. Publisher: Elsevier.
[271] Ivana Marková. Persuasion and Propaganda. Diogenes, 55(1):37–51, February
2008. Publisher: SAGE Publications Ltd.
[272] Michelle A Marks, Stephen J Zaccaro, and John E Mathieu. Performance impli-
cations of leader briefings and team-interaction training for team adaptation to
novel environments. Journal of applied psychology, 85(6):971, 2000. Publisher:
American Psychological Association.
[273] Briance Mascarenhas. The coordination of manufacturing interdependence in
multinational companies. Journal of International Business Studies, 15(3):91–
106, 1984.
[274] Vinayak Mathur, Yannis Stavrakas, and Sanjay Singh. Intelligence analysis
of Tay Twitter bot. In 2016 2nd International Conference on Contemporary
Computing and Informatics (IC3I), pages 231–236, December 2016.
[275] Gerald Matthews, Jinchao Lin, April Rose Panganiban, and Michael D. Long.
Individual Differences in Trust in Autonomous Robots: Implications for Trans-
parency. IEEE Transactions on Human-Machine Systems, 50(3):234–244, June
2020. Conference Name: IEEE Transactions on Human-Machine Systems.
[276] David Maulsby, Saul Greenberg, and Richard Mander. Prototyping an intelli-
gent agent through Wizard of Oz. In Proceedings of the INTERACT ’93 and
CHI ’93 Conference on Human Factors in Computing Systems, CHI ’93, pages
277–284, New York, NY, USA, May 1993. Association for Computing Machin-
ery.
[277] Joseph A. Maxwell. Qualitative research design: An interactive approach, vol-
ume 41. SAGE Publications, Inc., 2012.
[278] Daniel J. McFarland. The Role of Age and Efficacy on Technology Acceptance:
Implications for E-Learning. 2001.
[279] Geraldine B McGinty and Bibb Allen. The acr data science institute and ai
advisory group: harnessing the power of artificial intelligence to improve patient
care. Journal of the American College of Radiology, 15(3):577–579, 2018.
[280] Fenwick McKelvey and Elizabeth Dubois. Computational propaganda in
Canada: The use of political bots. 2017.
[281] Alexander McLeod, Sonja Pippin, and Vittoria Catania. Using Technology Ac-
ceptance Theory to Model Individual Differences in Tax Software Use. AMCIS
2009 Proceedings, January 2009.
[282] Michael D. McNeese, Priya Bains, Isaac Brewer, Cliff Brown, Erik S. Connors,
Tyrone Jefferson, Rashaad E.T. Jones, and Ivanna Terrell. The Neocities Simu-
lation: Understanding the Design and Experimental Methodology Used to De-
velop a Team Emergency Management Simulation. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 49(3):591–594, September
2005. Publisher: SAGE Publications Inc.
[283] Nathan McNeese, Mustafa Demir, Erin Chiou, Nancy Cooke, and Giovanni
Yanikian. Understanding the Role of Trust in Human-Autonomy Teaming.
January 2019.
[284] Nathan J. McNeese, Mustafa Demir, Erin K. Chiou, and Nancy J. Cooke. Trust
and Team Performance in Human–Autonomy Teaming. International Jour-
nal of Electronic Commerce, 25(1):51–72, January 2021. Publisher: Routledge
eprint: https://doi.org/10.1080/10864415.2021.1846854.
[285] Nathan J. McNeese, Mustafa Demir, Nancy J. Cooke, and Christopher Myers.
Teaming With a Synthetic Teammate: Insights into Human-Autonomy Team-
ing. Human Factors, 60(2):262–273, March 2018. Publisher: SAGE Publications
Inc.
[286] Nathan J. McNeese, Beau G. Schelble, Lorenzo Barberis Canonico, and Mustafa
Demir. Who/What Is My Teammate? Team Composition Considerations
in Human–AI Teaming. IEEE Transactions on Human-Machine Systems,
51(4):288–299, August 2021. Conference Name: IEEE Transactions on Human-
Machine Systems.
[287] Roshanak Mehdipanah, Kiana Bess, Steve Tomkowiak, Audrey Richardson,
Carmen Stokes, Denise White Perkins, Suzanne Cleage, Barbara A. Israel, and
Amy J. Schulz. Residential Racial and Socioeconomic Segregation as Predic-
tors of Housing Discrimination in Detroit Metropolitan Area. Sustainability,
12(24):10429, January 2020. Number: 24 Publisher: Multidisciplinary Digital
Publishing Institute.
[288] Joseph E. Mercado, Michael A. Rupp, Jessie Y. C. Chen, Michael J. Barnes,
Daniel Barber, and Katelyn Procci. Intelligent Agent Transparency in Hu-
man–Agent Teaming for Multi-UxV Management. Human Factors, 58(3):401–
415, May 2016. Publisher: SAGE Publications Inc.
[289] Stephanie M. Merritt, Heather Heimbaugh, Jennifer LaChapell, and Deborah
Lee. I Trust It, but I Don’t Know Why: Effects of Implicit Attitudes Toward
Automation on Trust in an Automated System. Human Factors, 55(3):520–534,
June 2013. Publisher: SAGE Publications Inc.
[290] Paul Messaris. Visual Persuasion: The Role of Images in Advertising. SAGE,
1997. Google-Books-ID: OQ5TPWYSndwC.
[291] Catherine L. Midla. Marketing Identity: Barbie, Lil Miquela, and Social Influ-
encers. Master’s thesis, Pratt Institute, United States – New York, 2021. ISBN:
9798516064623.
[292] Bradley N. Miller, Istvan Albert, Shyong K. Lam, Joseph A. Konstan, and
John Riedl. MovieLens unplugged: experiences with an occasionally connected
recommender system. In Proceedings of the 8th international conference on In-
telligent user interfaces, IUI ’03, pages 263–266, New York, NY, USA, January
2003. Association for Computing Machinery.
[293] R Mirnezami and A Ahmed. Surgery 3.0, artificial intelligence and the next-
generation surgeon. British Journal of Surgery, 105(5):463–465, April 2018.
[294] Nils Brede Moe, Torgeir Dingsøyr, and Tore Dybå. Understanding Self-
Organizing Teams in Agile Software Development. In 19th Australian Con-
ference on Software Engineering (aswec 2008), pages 76–85, March 2008. ISSN:
2377-5408.
[295] Andrew Monk, Marc Hassenzahl, Mark Blythe, and Darren Reed. Funology:
designing enjoyment. In CHI ’02 Extended Abstracts on Human Factors in
Computing Systems, CHI EA ’02, pages 924–925, New York, NY, USA, April
2002. Association for Computing Machinery.
[296] Rosemarie K. Moore. Susceptibility to hypnosis and susceptibility to social
influence. The Journal of Abnormal and Social Psychology, 68(3):282–294, 1964.
Place: US Publisher: American Psychological Association.
[297] Lisa M. Moynihan, Mark V. Roehling, Marcie A. LePine, and Wendy R.
Boswell. A Longitudinal Study of the Relationships Among Job Search Self-
Efficacy, Job Interviews, and Employment Outcomes. Journal of Business and
Psychology, 18(2):207–233, December 2003.
[298] Geoff Musick, Thomas A. O’Neill, Beau G. Schelble, Nathan J. McNeese, and
Jonn B. Henke. What Happens When Humans Believe Their Teammate is an
AI? An Investigation into Humans Teaming with Autonomy. Computers in
Human Behavior, 122:106852, 2021. Publisher: Elsevier.
[299] Christopher Myers, Jerry Ball, Nancy Cooke, Mary Freiman, Michelle Caisse,
Stuart Rodgers, Mustafa Demir, and Nathan McNeese. Autonomous intelligent
agents for team training. IEEE Intelligent Systems, 34(2):3–14, 2018. Publisher:
IEEE.
[300] Juana Isabel Méndez, Omar Mata, Pedro Ponce, Alan Meier, Therese Peffer,
and Arturo Molina. Multi-sensor System, Gamification, and Artificial Intelli-
gence for Benefit Elderly People. In Hiram Ponce, Lourdes Martínez-Villaseñor,
Jorge Brieva, and Ernesto Moya-Albor, editors, Challenges and Trends in Mul-
timodal Fall Detection for Healthcare, Studies in Systems, Decision and Control,
pages 207–235. Springer International Publishing, Cham, 2020.
[301] Tom Nadarzynski, Oliver Miles, Aimee Cowie, and Damien Ridge. Acceptabil-
ity of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-
methods study. DIGITAL HEALTH, 5:2055207619871808, January 2019. Pub-
lisher: SAGE Publications Ltd.
[302] Idah Naile and Jacob M. Selesho. The Role of Leadership in Employee Motiva-
tion. Mediterranean Journal of Social Sciences, 5(3):175, March 2014. Number:
3.
[303] Mark A. Neerincx, Jasper van der Waa, Frank Kaptein, and Jurriaan van Digge-
len. Using Perceptual and Cognitive Explanations for Enhanced Human-Agent
Team Performance. In Don Harris, editor, Engineering Psychology and Cog-
nitive Ergonomics, Lecture Notes in Computer Science, pages 204–214, Cham,
2018. Springer International Publishing.
[304] Michael A. Nees. Acceptance of Self-driving Cars: An Examination of Ide-
alized versus Realistic Portrayals with a Self- driving Car Acceptance Scale.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
60(1):1449–1453, September 2016. Publisher: SAGE Publications Inc.
[305] Gina Neff. Managing AI When New Technologies Meet Old Workplaces. SASE,
June 2019.
[306] Gina Neff and Peter Nagy. Automation, Algorithms, and Politics| Talking
to Bots: Symbiotic Agency and the Case of Tay. International Journal of
Communication, 10(0):17, October 2016. Number: 0.
[307] Jochen Nelles, Sinem Kuz, Alexander Mertens, and Christopher M Schlick.
Human-centered design of assistance systems for production planning and con-
trol: The role of the human in industry 4.0. In 2016 IEEE International Con-
ference on Industrial Technology (ICIT), pages 2099–2104. IEEE, 2016.
[308] Barbara Barbosa Neves, Fausto Amaro, and Jaime R. S. Fonseca. Coming
of (Old) Age in the Digital Age: ICT Usage and Non-Usage among Older
Adults. Sociological Research Online, 18(2):22–35, May 2013. Publisher: SAGE
Publications Ltd.
[309] Nhan Nguyen and Sarah Nadi. An Empirical Evaluation of GitHub Copilot’s
Code Suggestions. In 2022 IEEE/ACM 19th International Conference on Min-
ing Software Repositories (MSR), pages 1–5, May 2022. ISSN: 2574-3864.
[310] Björn Niehaves and Ralf Plattfaut. Internet adoption by the elderly: employ-
ing IS technology acceptance theories for understanding the age-related digital
divide. European Journal of Information Systems, 23(6):708–726, November
2014.
[311] Jakob Nielsen. Usability Engineering. Morgan Kaufmann, October 1994.
Google-Books-ID: 95As2OF67f0C.
[312] A. Nigam. Multiple and Competing Goals in Organizations: Insights for Medical
Leaders. BMJ Leader, 2(3):85–86, September 2018. Number: 3 Publisher: BMJ.
[313] Galit Nimrod. Technophobia among older Internet users. Educational
Gerontology, 44(2-3):148–162, March 2018. Publisher: Routledge eprint:
https://doi.org/10.1080/03601277.2018.1428145.
[314] Richard E. Nisbett and Andrew Gordon. Self-esteem and susceptibility to social
influence. Journal of Personality and Social Psychology, 5(3):268–276, 1967.
Place: US Publisher: American Psychological Association.
[315] Jessica M. Nolan, P. Wesley Schultz, Robert B. Cialdini, Noah J. Goldstein, and
Vladas Griskevicius. Normative Social Influence is Underdetected. Personality
and Social Psychology Bulletin, 34(7):913–923, July 2008. Publisher: SAGE
Publications Inc.
[316] T. Nomura, T. Kanda, T. Suzuki, and K. Kato. Psychology in human-
robot communication: an attempt through investigation of negative attitudes
and anxiety toward robots. In RO-MAN 2004. 13th IEEE International
Workshop on Robot and Human Interactive Communication (IEEE Catalog
No.04TH8759), pages 35–40, September 2004.
[317] Edward Godfrey Ochieng and Andrew David Price. Framework for manag-
ing multicultural project teams. Engineering, Construction and Architectural
Management, 2009.
[318] Jessica Ochmann, Leonard Michels, Sandra Zilker, Verena Tiefenbeck, and Sven
Laumer. The influence of algorithm aversion and anthropomorphic agent design
on the acceptance of AI-based job recommendations. December 2020.
[319] Office of Naval Research. Programs - Human Interaction with Autonomous
Systems - Office of Naval Research.
[320] Office of Naval Research. Programs - Science of Autonomy - Office of Naval
Research.
[321] Air Force Office of Scientific Research. Autonomy, Cognitive Sciences, and
Human Factors - Research Areas - AFOSR.
[322] Air Force Office of Scientific Research. Trust and Influence - Research Areas -
AFOSR.
[323] Changhoon Oh, Taeyoung Lee, Yoojung Kim, SoHyun Park, Saebom Kwon,
and Bongwon Suh. Us vs. Them: Understanding Artificial Intelligence Techno-
phobia over the Google DeepMind Challenge Match. In Proceedings of the 2017
CHI Conference on Human Factors in Computing Systems, CHI ’17, pages
2523–2534, New York, NY, USA, May 2017. Association for Computing Ma-
chinery.
[324] Jukka-Pekka Onnela and Felix Reed-Tsochas. Spontaneous emergence of social
influence in online systems. Proceedings of the National Academy of Sciences,
107(43):18375–18380, October 2010. Publisher: Proceedings of the National
Academy of Sciences.
[325] John O’Shaugnessy and Nicholas O’Shaughnessy. Persuasion in Advertising.
Routledge, London, November 2003.
[326] Amy L. Ostrom, Darima Fotheringham, and Mary Jo Bitner. Customer Ac-
ceptance of AI in Service Encounters: Understanding Antecedents and Conse-
quences. In Paul P. Maglio, Cheryl A. Kieliszewski, James C. Spohrer, Kelly
Lyons, Lia Patrício, and Yuriko Sawatani, editors, Handbook of Service Science,
Volume II, Service Science: Research and Innovations in the Service Economy,
pages 77–103. Springer International Publishing, Cham, 2019.
[327] Nelly Oudshoorn and Trevor Pinch. How Users Matter: The Co-Construction
of Users and Technology (Inside Technology). The MIT Press, 2003.
[328] A. Ant Ozok, Quyin Fan, and Anthony F. Norcio. Design guidelines for effective
recommender system interfaces based on a usability criteria conceptual model:
results from a college student population. Behaviour & Information Technology,
29(1):57–83, January 2010.
[329] Thomas O’Neill, Nathan McNeese, Amy Barron, and Beau Schelble. Hu-
man–Autonomy Teaming: A Review and Analysis of the Empirical Literature.
Human Factors, page 0018720820960865, October 2020. Publisher: SAGE Pub-
lications Inc.
[330] Rohan Paleja, Muyleng Ghuy, Nadun Ranawaka Arachchige, Reed Jensen, and
Matthew Gombolay. The utility of explainable ai in ad hoc human-machine
teaming. Advances in Neural Information Processing Systems, 34, 2021.
[331] Raja Parasuraman and Dietrich H Manzey. Complacency and bias in human
use of automation: An attentional integration. Human factors, 52(3):381–410,
2010.
[332] Raja Parasuraman, Thomas B Sheridan, and Christopher D Wickens. A model
for types and levels of human interaction with automation. IEEE Transactions
on systems, man, and cybernetics-Part A: Systems and Humans, 30(3):286–297,
2000.
[333] Ravi B Parikh, Stephanie Teeple, and Amol S Navathe. Addressing bias in
artificial intelligence in health care. Jama, 322(24):2377–2378, 2019.
[334] Hyanghee Park, Daehwan Ahn, Kartik Hosanagar, and Joonhwan Lee. Human-
AI Interaction in Human Resource Management: Understanding Why Employ-
ees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens.
In Proceedings of the 2021 CHI Conference on Human Factors in Computing
Systems, CHI ’21, pages 1–15, New York, NY, USA, May 2021. Association for
Computing Machinery.
[335] Sun Young Park, Pei-Yi Kuo, Andrea Barbarin, Elizabeth Kaziunas, Astrid
Chow, Karandeep Singh, Lauren Wilcox, and Walter S Lasecki. Identifying
challenges and opportunities in human-ai collaboration in healthcare. In Con-
ference Companion Publication of the 2019 on Computer Supported Cooperative
Work and Social Computing, pages 506–510, 2019.
[336] Carlos M. Parra, Manjul Gupta, and Denis Dennehy. Likelihood of Question-
ing AI-based Recommendations Due to Perceived Racial/Gender Bias. IEEE
Transactions on Technology and Society, pages 1–1, 2021. Conference Name:
IEEE Transactions on Technology and Society.
[337] David R. Patterson, Mark P. Jensen, Shelley A. Wiechman, and Sam R.
Sharar. Virtual Reality Hypnosis for Pain Associated With Recovery
From Physical Trauma. International Journal of Clinical and Experimen-
tal Hypnosis, 58(3):288–300, May 2010. Publisher: Routledge eprint:
https://doi.org/10.1080/00207141003760595.
[338] David R. Patterson, Jennifer R. Tininenko, Anne E. Schmidt,
and Sam R. Sharar. Virtual Reality Hypnosis: A Case Re-
port. International Journal of Clinical and Experimental Hyp-
nosis, 52(1):27–38, January 2004. Publisher: Routledge eprint:
https://www.tandfonline.com/doi/pdf/10.1076/iceh.52.1.27.23925.
[339] Michelle M. Patterson, Albert V. Carron, and Todd M. Loughead. The influence
of team norms on the cohesion–self-reported performance relationship: a multi-
level analysis. Psychology of Sport and Exercise, 6(4):479–493, July 2005.
[340] Delroy L. Paulhus, Bryce G. Westlake, Stryker S. Calvez, and P. D. Harms.
Self-presentation style in job interviews: the role of personality and cul-
ture. Journal of Applied Social Psychology, 43(10):2042–2059, 2013. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/jasp.12157.
[341] Cassio Pennachin and Ben Goertzel. Contemporary approaches to artificial
general intelligence. In Artificial general intelligence, pages 1–30. Springer,
2007.
[342] Alex Pentland. Optimized Human-AI Decision Making: A Personal Perspective.
In Proceedings of the 2021 International Conference on Multimodal Interaction,
pages 778–780. Association for Computing Machinery, New York, NY, USA,
October 2021.
[343] Barbara J. Phillips and Edward F. McQuarrie. Narrative and Persuasion in
Fashion Advertising. Journal of Consumer Research, 37(3):368–392, October
2010.
[344] Jonathan D. Pierce, Beverly Rosipko, Lisa Youngblood, Robert C. Gilkeson,
Amit Gupta, and Leonardo Kayat Bittencourt. Seamless Integration of Artifi-
cial Intelligence Into the Clinical Environment: Our Experience With a Novel
Pneumothorax Detection Artificial Intelligence Algorithm. Journal of the Amer-
ican College of Radiology, 18(11):1497–1505, November 2021.
[345] Marc JV Ponsen, Héctor Muñoz-Avila, Pieter Spronck, and David W Aha. Au-
tomatically acquiring domain knowledge for adaptive game ai using evolutionary
learning. In Proceedings Of The National Conference On Artificial Intelligence,
volume 20, page 1535. Menlo Park, CA; Cambridge, MA; London; AAAI Press;
MIT Press; 1999, 2005.
[346] Rui Prada and Ana Paiva. Teaming up humans with autonomous synthetic
characters. Artificial Intelligence, 173(1):80–103, 2009.
[347] Jorge Peña Queralta, Jenni Raitoharju, Tuan Nguyen Gia, Nikolaos Pas-
salis, and Tomi Westerlund. AutoSOS: Towards Multi-UAV Systems Support-
ing Maritime Search and Rescue with Lightweight AI and Edge Computing.
arXiv:2005.03409 [cs], May 2020. arXiv: 2005.03409.
[348] R. Jay Shively, Summer L. Brandt, Joel Lachter, Mike Matessa, Garrett Sadler,
and Henri Battiste. Application of Human-Autonomy Teaming (HAT) Patterns
to Reduced Crew Operations (RCO). In Don Harris, editor, Engineering Psy-
chology and Cognitive Ergonomics, Lecture Notes in Computer Science, pages
244–255, Cham, 2016. Springer International Publishing.
[349] Martin Ragot, Nicolas Martin, and Salomé Cojean. AI-generated vs. human
artworks. a perception bias towards artificial intelligence? In Extended abstracts
of the 2020 CHI conference on human factors in computing systems, pages 1–10,
2020.
[350] Arun Rai. Explainable AI: from black box to glass box. Journal of the Academy
of Marketing Science, 48(1):137–141, January 2020.
[351] Arun Rai, Panos Constantinides, and Saonee Sarker. Next Generation Digital
Platforms:: Toward Human-AI Hybrids. MIS Quarterly, 43(1):iii–ix, March
2019. Publisher: University of Minnesota.
[352] S.M. Rajpara, A.P. Botello, J. Townend, and A.D. Ormerod. Systematic review
of dermoscopy and digital dermoscopy/artificial intelligence for the diagnosis
of melanoma. British Journal of Dermatology, 161(3):591–604, 2009. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-2133.2009.09093.x.
[353] Nagarajan Ramamoorthy and Patrick C Flood. Individualism/collectivism,
perceived task interdependence and teamwork attitudes among irish blue-collar
employees: a test of the main and moderating effects? Human Relations,
57(3):347–366, 2004.
[354] Gonzalo Ramos, Jina Suh, Soroush Ghorashi, Christopher Meek, Richard
Banks, Saleema Amershi, Rebecca Fiebrink, Alison Smith-Renner, and Gagan
Bansal. Emerging perspectives in human-centered machine learning. In Ex-
tended Abstracts of the 2019 CHI Conference on Human Factors in Computing
Systems, pages 1–8, 2019.
[355] Lisa Rashotte. Social influence. The Blackwell encyclopedia of sociology, 2007.
[356] A. D. Reiling. Courts and Artificial Intelligence Professional Article. Interna-
tional Journal for Court Administration, 11(2):1–10, 2020.
[357] Alexander Renkl. Learning from worked-out examples: A study on individual
differences. Cognitive science, 21(1):1–29, 1997.
[358] Joan R. Rentsch and Richard J. Klimoski. Why do ‘great minds’
think alike?: antecedents of team member schema agreement. Jour-
nal of Organizational Behavior, 22(2):107–120, 2001. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1002/job.81.
[359] Abel A Reyes, Colin Elkin, Quamar Niyaz, Xiaoli Yang, Sidike Paheding, and
Vijay K Devabhaktuni. A Preliminary Work on Visualization-based Education
Tool for High School Machine Learning Education. In 2020 IEEE Integrated
STEM Education Conference (ISEC), pages 1–5, August 2020. ISSN: 2330-
331X.
[360] Doug Riecken. Introduction: personalized views of personalization. Commu-
nications of the ACM, 43(8):26–28, 2000. Publisher: ACM New York, NY,
USA.
[361] Mark O Riedl. Human-centered artificial intelligence and machine learning.
Human Behavior and Emerging Technologies, 1(1):33–36, 2019.
[362] Helen Robinson, Anna Wysocka, and Chris Hand. Internet advertising effective-
ness. International Journal of Advertising, 26(4):527–541, January 2007. Pub-
lisher: Routledge eprint: https://doi.org/10.1080/02650487.2007.11073031.
[363] Leroy Robinson, Greg W. Marshall, and Miriam B. Stamps. Sales force use
of technology: antecedents to technology acceptance. Journal of Business Re-
search, 58(12):1623–1631, December 2005.
[364] Stuart M Rodgers, Christopher W Myers, Jerry Ball, and Mary D Freiman.
The Situation Model in the Synthetic Teammate Project. page 8, 2011.
[365] Sebastian S Rodriguez, Jacqueline Chen, Harsh Deep, Jaewook Jae Lee, Der-
rik E Asher, and Erin Zaroukian. Measuring complacency in humans interacting
with autonomous agents in a multi-agent system. In Artificial Intelligence and
Machine Learning for Multi-Domain Operations Applications II, volume 11413,
pages 258–271. SPIE, 2020.
[366] Larry D. Rosen, Deborah C. Sears, and Michelle M. Weil. Treating techno-
phobia: A longitudinal evaluation of the computerphobia reduction program.
Computers in Human Behavior, 9(1):27–50, March 1993.
[367] Larry D. Rosen and Michelle M. Weil. Computer availability, computer expe-
rience and technophobia among public school teachers. Computers in Human
Behavior, 11(1):9–31, March 1995.
[368] Avi Rosenfeld and Ariella Richardson. Explainability in human–agent systems.
Autonomous Agents and Multi-Agent Systems, 33(6):673–705, November 2019.
[369] Emilie M. Roth, Christen Sushereba, Laura G. Militello, Julie Diiulio, and Katie
Ernst. Function Allocation Considerations in the Era of Human Autonomy
Teaming. Journal of Cognitive Engineering and Decision Making, 13(4):199–
220, December 2019. Publisher: SAGE Publications.
[370] Ericka Rovira, Anne Collins McLaughlin, Richard Pak, and Luke High. Looking
for Age Differences in Self-Driving Vehicles: Examining the Effects of Automa-
tion Reliability, Driving Risk, and Physical Impairment on Trust. Frontiers in
Psychology, 10, 2019.
[371] Kai Ruggeri, Ondřej Kácha, Igor G. Menezes, Michaela Kos, Matija Franklin,
Laurie Parma, Patrick Langdon, Brian Matthews, and John Miles. In with
the new? Generational differences shape population technology adoption pat-
terns in the age of self-driving vehicles. Journal of Engineering and Technology
Management, 50:39–44, October 2018.
[372] Giulia Russo, Pedro Reche, Marzio Pennisi, and Francesco Pap-
palardo. The combination of artificial intelligence and systems biol-
ogy for intelligent vaccine design. Expert Opinion on Drug Discovery,
15(11):1267–1281, November 2020. Publisher: Taylor & Francis eprint:
https://doi.org/10.1080/17460441.2020.1791076.
[373] Maria-Lucia Rusu and Ramona Herman. The implications of propaganda as a
social influence strategy. Scientific Bulletin, 23(2):118–125, 2018.
[374] Christina Rödel, Susanne Stadler, Alexander Meschtscherjakov, and Manfred
Tscheligi. Towards Autonomous Cars: The Effect of Autonomy Levels on Ac-
ceptance and User Experience. In Proceedings of the 6th International Con-
ference on Automotive User Interfaces and Interactive Vehicular Applications,
AutomotiveUI ’14, pages 1–8, New York, NY, USA, September 2014. Associa-
tion for Computing Machinery.
[375] Richard Saavedra, P Christopher Earley, and Linn Van Dyne. Complex inter-
dependence in task-performing groups. Journal of applied psychology, 78(1):61,
1993.
[376] Tahir Saeed, Shazia Almas, M. Anis-ul Haq, and GSK Niazi. Leadership styles:
relationship with conflict management styles. International Journal of Con-
flict Management, 25(3):214–225, January 2014. Publisher: Emerald Group
Publishing Limited.
[377] National Highway Traffic Safety Administration and others. Preliminary statement of policy
concerning automated vehicles. Washington, DC, 1:14, 2013.
[378] Lynda M. Sagrestano, Christopher L. Heavey, and Andrew Chris-
tensen. Perceived Power and Physical Violence in Marital Con-
flict. Journal of Social Issues, 55(1):65–79, 1999. eprint:
https://spssi.onlinelibrary.wiley.com/doi/pdf/10.1111/0022-4537.00105.
[379] Mari Sako. Artificial intelligence and the future of professional work. Commu-
nications of the ACM, 63(4):25–27, 2020.
[380] Bawornsak Sakulkueakulsuk, Siyada Witoon, Potiwat Ngarmkajornwiwat,
Pornpen Pataranutaporn, Werasak Surareungchai, Pat Pataranutaporn, and
Pakpoom Subsoontorn. Kids making AI: Integrating Machine Learning, Gam-
ification, and Social Context in STEM Education. In 2018 IEEE International
Conference on Teaching, Assessment, and Learning for Engineering (TALE),
pages 1005–1010, December 2018. ISSN: 2470-6698.
[381] Eduardo Salas, Nancy J. Cooke, and Michael A. Rosen. On Teams, Team-
work, and Team Performance: Discoveries and Developments. Human Factors,
50(3):540–547, June 2008. Publisher: SAGE Publications Inc.
[382] Eduardo Salas and Stephen M. Fiore, editors. Team cognition: Understanding
the factors that drive process and performance. American Psychological
Association, Washington, DC, US, 2004. Pages: xi, 268.
[383] Eduardo Salas and Stephen M. Fiore. Why team cognition? An overview. In
Team cognition: Understanding the factors that drive process and performance,
pages 3–8. American Psychological Association, Washington, DC, US, 2004.
[384] Eduardo Salas, Dana E. Sims, and C. Shawn Burke. Is there a “Big Five” in
Teamwork? Small Group Research, 36(5):555–599, October 2005. Publisher:
SAGE Publications Inc.
[385] Jerome H. Saltzer. The Origin of the “MIT License”. IEEE Annals of the
History of Computing, 42(4):94–98, 2020. Publisher: IEEE Computer Society.
[386] Swatee Sarangi and Shreya Shah. Individuals, teams and organizations score
with gamification: Tool can help to motivate employees and boost performance.
Human Resource Management International Digest, 23(4):24–27, January 2015.
Publisher: Emerald Group Publishing Limited.
[387] Kristin E. Schaefer, Edward R. Straub, Jessie Y. C. Chen, Joe Putney, and
A. W. Evans. Communicating intent to develop shared situation awareness and
engender trust in human-agent teams. Cognitive Systems Research, 46:26–39,
December 2017.
[388] Beau Schelble, Lorenzo-Barberis Canonico, Nathan McNeese, Jack Carroll, and
Casey Hird. Designing Human-Autonomy Teaming Experiments Through Rein-
forcement Learning. Proceedings of the Human Factors and Ergonomics Society
Annual Meeting, 64(1):1426–1430, December 2020. Publisher: SAGE Publica-
tions Inc.
[389] Beau G. Schelble, Christopher Flathmann, and Nathan McNeese. Towards
Meaningfully Integrating Human-Autonomy Teaming in Applied Settings. In
Proceedings of the 8th International Conference on Human-Agent Interaction,
HAI ’20, pages 149–156, New York, NY, USA, November 2020. Association for
Computing Machinery.
[390] Beau G Schelble, Christopher Flathmann, Nathan J McNeese, Guo Freeman,
and Rohit Mallick. Let’s think together! assessing shared mental models, per-
formance, and trust in human-agent teams. Proceedings of the ACM on Human-
Computer Interaction, 6(GROUP):1–29, 2022.
[391] Beau G. Schelble, Christopher Flathmann, Nathan J. McNeese, Guo Freeman,
and Rohit Mallick. Let’s Think Together! Assessing Shared Mental Models,
Performance, and Trust in Human-Agent Teams. Proceedings of the ACM on
Human-Computer Interaction, 6(GROUP):13:1–13:29, January 2022.
[392] Paul Schermerhorn and Matthias Scheutz. Disentangling the Effects of Robot
Affect, Embodiment, and Autonomy on Human Team Members in a Mixed-
Initiative Task. ACHI 2011 - 4th International Conference on Advances in
Computer-Human Interactions, January 2011.
[393] Dietram A. Scheufele. Framing as a theory of media effects.
Journal of Communication, 49(1):103–122, 1999. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1460-2466.1999.tb02784.x.
[394] Axel Schulte and Diana Donath. A Design and Description Method for Human-
Autonomy Teaming Systems. In Waldemar Karwowski and Tareq Ahram, ed-
itors, Intelligent Human Systems Integration, Advances in Intelligent Systems
and Computing, pages 3–9, Cham, 2018. Springer International Publishing.
[395] David Schuster, Joseph R. Keebler, Jorge Zuniga, and Florian Jentsch. Individ-
ual differences in SA measurement and performance in human-robot teaming.
In 2012 IEEE International Multi-Disciplinary Conference on Cognitive Meth-
ods in Situation Awareness and Decision Support, pages 187–190, March 2012.
ISSN: 2379-1675.
[396] Aaron Sedley and Hendrik Müller. Minimizing change aversion for the google
drive launch. In CHI ’13 Extended Abstracts on Human Factors in Comput-
ing Systems, CHI EA ’13, pages 2351–2354, New York, NY, USA, April 2013.
Association for Computing Machinery.
[397] Isabella Seeber, Eva Bittner, Robert O. Briggs, Triparna de Vreede, Gert-Jan
de Vreede, Aaron Elkins, Ronald Maier, Alexander B. Merz, Sarah Oeste-Reiß,
Nils Randrup, Gerhard Schwabe, and Matthias Söllner. Machines as teammates:
A research agenda on AI in team collaboration. Information & Management,
57(2):103174, March 2020.
[398] Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubrama-
nian, and Janet Vertesi. Fairness and Abstraction in Sociotechnical Systems. In
Proceedings of the Conference on Fairness, Accountability, and Transparency,
FAT* ’19, pages 59–68, New York, NY, USA, January 2019. Association for
Computing Machinery.
[399] Pedro Sequeira, Patrícia Alves-Oliveira, Tiago Ribeiro, Eugenio Di Tullio, Sofia
Petisca, Francisco S. Melo, Ginevra Castellano, and Ana Paiva. Discovering
social interaction strategies for robots from restricted-perception Wizard-of-Oz
studies. In 2016 11th ACM/IEEE International Conference on Human-Robot
Interaction (HRI), pages 197–204, March 2016. ISSN: 2167-2148.
[400] Henrietta Sherwin, Kiron Chatterjee, and Juliet Jain. An exploration of the
importance of social influence in the decision to start bicycling in england.
Transportation Research Part A: Policy and Practice, 68:32–45, 2014.
[401] Ben Shneiderman. Human-centered artificial intelligence: Reliable, safe & trust-
worthy. International Journal of Human–Computer Interaction, 36(6):495–504,
2020.
[402] Ben Shneiderman. Human-centered artificial intelligence: three fresh ideas. AIS
Transactions on Human-Computer Interaction, 12(3):109–124, 2020.
[403] Ben Shneiderman. Human-centered ai. Issues in Science and Technology,
37(2):56–61, 2021.
[404] Jake Silberg and James Manyika. Notes from the AI frontier: Tackling bias in
AI (and in humans). McKinsey Global Institute, pages 1–6, 2019.
[405] A. Sivunen and M. Valo. Team leaders’ technology choice in virtual teams.
IEEE Transactions on Professional Communication, 49(1):57–68, March 2006.
Conference Name: IEEE Transactions on Professional Communication.
[406] Aaron Smith and Janna Anderson. AI, Robotics, and the Future of Jobs. Pew
Research Center, 6:51, 2014.
[407] Ashley M. Smith and Mark Green. Artificial Intelligence and the Role
of Leadership. Journal of Leadership Studies, 12(3):85–87, 2018. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1002/jls.21605.
[408] Dominik Sobania, Martin Briesch, and Franz Rothlauf. Choose your program-
ming copilot: a comparison of the program synthesis performance of github
copilot and genetic programming. In Proceedings of the Genetic and Evolution-
ary Computation Conference, GECCO ’22, pages 1019–1027, New York, NY,
USA, July 2022. Association for Computing Machinery.
[409] Victor Sohmen. Leadership and teamwork: Two sides of the same coin. Journal
of IT and Economic Development, 4:1–18, January 2013.
[410] Jae Ho Sohn, Yeshwant Reddy Chillakuru, Stanley Lee, Amie Y. Lee, Tatiana
Kelil, Christopher Paul Hess, Youngho Seo, Thienkhai Vu, and Bonnie N. Joe.
An Open-Source, Vender Agnostic Hardware and Software Pipeline for Integra-
tion of Artificial Intelligence in Radiology Workflow. Journal of Digital Imaging,
33(4):1041–1046, August 2020.
[411] Kwonsang Sohn and Ohbyung Kwon. Technology acceptance theories and fac-
tors influencing artificial Intelligence-based intelligent products. Telematics and
Informatics, 47:101324, April 2020.
[412] Chonggang Song, Wynne Hsu, and Mong Li Lee. Targeted influence maxi-
mization in social networks. In Proceedings of the 25th ACM International
on Conference on Information and Knowledge Management, pages 1683–1692,
2016.
[413] Erich Sorantin, Michael G. Grasser, Ariane Hemmelmayr, Sebastian Tschauner,
Franko Hrzic, Veronika Weiss, Jana Lacekova, and Andreas Holzinger. The aug-
mented radiologist: artificial intelligence in the practice of radiology. Pediatric
Radiology, October 2021.
[414] Glenn G. Sparks. Media Effects Research: A Basic Overview. Cengage Learning,
January 2015.
[415] Pieter Spronck, Marc Ponsen, Ida Sprinkhuizen-Kuyper, and Eric Postma.
Adaptive game AI with dynamic scripting. Machine Learning, 63(3):217–248,
June 2006.
[416] Mark Srite and Elena Karahanna. The Role of Espoused National Cultural Val-
ues in Technology Acceptance. MIS Quarterly, 30(3):679–704, 2006. Publisher:
Management Information Systems Research Center, University of Minnesota.
[417] Pallavi Srivastava and Shilpi Jain. A leadership framework for distributed self-
organized scrum teams. Team Performance Management: An International
Journal, 23(5/6):293–314, January 2017. Publisher: Emerald Publishing Lim-
ited.
[418] Luke Stark. Facial recognition is the plutonium of AI. XRDS: Crossroads, The
ACM Magazine for Students, 25(3):50–55, April 2019.
[419] John Stasko, Albert Badre, and Clayton Lewis. Do algorithm animations assist
learning? an empirical study and analysis. In Proceedings of the INTERACT
’93 and CHI ’93 Conference on Human Factors in Computing Systems, CHI
’93, pages 61–66, New York, NY, USA, May 1993. Association for Computing
Machinery.
[420] John T. Stasko. Supporting student-built algorithm animation as a pedagogical
tool. In CHI ’97 Extended Abstracts on Human Factors in Computing Systems,
CHI EA ’97, pages 24–25, New York, NY, USA, March 1997. Association for
Computing Machinery.
[421] Konstantinos Stathoulopoulos and Juan C. Mateos-Garcia. Gender Diversity
in AI Research. SSRN Scholarly Paper ID 3428240, Social Science Research
Network, Rochester, NY, July 2019.
[422] Sebastián Steizel and Eva Rimbau-Gilabert. Upward influence tactics through
technology-mediated communication tools. Computers in Human Behavior,
29(2):462–472, March 2013.
[423] Cynthia Kay Stevens and Amy L. Kristof. Making the right impression: A
field study of applicant impression management during job interviews. Journal
of Applied Psychology, 80(5):587–606, 1995. Place: US Publisher: American
Psychological Association.
[424] Kil Soo Suh. Impact of communication medium on task performance and satis-
faction: an examination of media-richness theory. Information & Management,
35(5):295–312, May 1999.
[425] Sharifa Sultana, Shaid Hasan, Khandaker Reaz Mahmud, S. M. Raihanul Alam,
and Syed Ishtiaque Ahmed. ’Shada Baksho’: a hardware device to explore the
fears of using mobile phones among the rural women of Bangladesh. In Proceed-
ings of the Tenth International Conference on Information and Communication
Technologies and Development, ICTD ’19, pages 1–4, New York, NY, USA, Jan-
uary 2019. Association for Computing Machinery.
[426] Pei-Chen Sun and Hsing Kenny Cheng. The design of instructional multime-
dia in e-Learning: A Media Richness Theory-based approach. Computers &
Education, 49(3):662–676, November 2007.
[427] Dhiraj Sunehra, B. Jhansi, and R. Sneha. Smart Robotic Personal Assistant
Vehicle Using Raspberry Pi and Zero UI Technology. In 2021 6th International
Conference for Convergence in Technology (I2CT), pages 1–6, April 2021.
[428] Priyanka Surendran. Technology Acceptance Model: A Survey of Literature. In-
ternational Journal of Business and Social Research, 2(4):175–178, 2012. Pub-
lisher: MIR Center for Socio-Economic Research.
[429] Teo Susnjak. ChatGPT: The end of online exam integrity? arXiv preprint
arXiv:2212.09292, 2022.
[430] Zachari Swiecki. Measuring the impact of interdependence on individuals during
collaborative problem-solving. Journal of Learning Analytics, 8(1):75–94, 2021.
[431] Bernadette Szajna. Empirical Evaluation of the Revised Technology Accep-
tance Model. Management Science, 42(1):85–92, January 1996. Publisher:
INFORMS.
[432] Michael Szollosy. Freud, frankenstein and our fear of robots: projection in our
cultural perception of technology. Ai & Society, 32(3):433–439, 2017.
[433] Simon Taggar and Robert Ellis. The role of leaders in shaping formal team
norms. The Leadership Quarterly, 18(2):105–120, April 2007.
[434] Xu Tan and Xiaobing Li. A Tutorial on AI Music Composition. In Proceedings
of the 29th ACM International Conference on Multimedia, pages 5678–5680.
Association for Computing Machinery, New York, NY, USA, October 2021.
[435] Myriam Tanguay-Sela, David Benrimoh, Christina Popescu, Tamara Perez,
Colleen Rollins, Emily Snook, Eryn Lundrigan, Caitrin Armstrong, Kelly
Perlman, Robert Fratila, Joseph Mehltretter, Sonia Israel, Monique Cham-
pagne, Jérôme Williams, Jade Simard, Sagar V. Parikh, Jordan F. Karp,
Katherine Heller, Outi Linnaranta, Liliana Gomez Cardona, Gustavo Turecki,
and Howard C. Margolese. Evaluating the perceived utility of an artificial
intelligence-powered clinical decision support system for depression treatment
using a simulation center. Psychiatry Research, 308:114336, February 2022.
[436] Sunil Thomas, Ann Abraham, Jeremy Baldwin, Sakshi Piplani, and Nikolai
Petrovsky. Artificial Intelligence in Vaccine and Drug Design. In Sunil Thomas,
editor, Vaccine Design: Methods and Protocols, Volume 1. Vaccines for Human
Diseases, Methods in Molecular Biology, pages 131–146. Springer US, New
York, NY, 2022.
[437] H Holden Thorp. ChatGPT is fun, but not an author, 2023.
[438] Jim Torresen. A Review of Future and Ethical Perspectives of Robotics and
AI. Frontiers in Robotics and AI, 4, 2018.
[439] David Traum, Jeff Rickel, Jonathan Gratch, and Stacy Marsella. Negotiation
over tasks in hybrid human-agent teams for simulation-based training. In Pro-
ceedings of the second international joint conference on Autonomous agents and
multiagent systems, AAMAS ’03, pages 441–448, New York, NY, USA, July
2003. Association for Computing Machinery.
[440] Chi-Hsing Tseng and Li-Fun Wei. The efficiency of mobile media richness across
different stages of online consumer behavior. International Journal of Informa-
tion Management, 50:353–364, February 2020.
[441] John C. Turner. Social influence. Thomson Brooks/Cole Publishing Co, Belmont,
CA, US, 1991. Pages: xvi, 206.
[442] Marie E. Vachovsky, Grace Wu, Sorathan Chaturapruek, Olga Russakovsky,
Richard Sommer, and Li Fei-Fei. Toward More Gender Diversity in CS through
an Artificial Intelligence Summer Program for High School Girls. In Proceed-
ings of the 47th ACM Technical Symposium on Computing Science Education,
SIGCSE ’16, pages 303–308, New York, NY, USA, February 2016. Association
for Computing Machinery.
[443] Philip van Allen. Prototyping ways of prototyping AI. Interactions, 25(6):46–
51, October 2018.
[444] Andrew H Van de Ven, Andre L Delbecq, and Richard Koenig Jr. Determinants
of coordination modes within organizations. American sociological review, pages
322–338, 1976.
[445] Karel van den Bosch and Adelbert Bronkhorst. Human-AI cooperation to ben-
efit military decision making. NATO, 2018.
[446] Joris van den Oever. The Performance Impact of Communication Failure in
BlocksWorld for Teams. 2020.
[447] Jinke D Van Der Laan, Adriaan Heino, and Dick De Waard. A simple procedure
for the assessment of acceptance of advanced transport telematics. Transporta-
tion Research Part C: Emerging Technologies, 5(1):1–10, 1997.
[448] Jasper van der Waa, Sabine Verdult, Karel van den Bosch, Jurriaan van Digge-
len, Tjalling Haije, Birgit van der Stigchel, and Ioana Cocu. Moral Decision
Making in Human-Agent Teams: Human Control and the Role of Explanations.
Frontiers in Robotics and AI, 8:640647, May 2021.
[449] Piyush Vashistha, Juginder Pal Singh, Pranav Jain, and Jitendra Kumar. Rasp-
berry Pi based voice-operated personal assistant (Neobot). In 2019 3rd Inter-
national conference on Electronics, Communication and Aerospace Technology
(ICECA), pages 974–978, June 2019.
[450] Viswanath Venkatesh. Determinants of Perceived Ease of Use: Integrating Con-
trol, Intrinsic Motivation, and Emotion into the Technology Acceptance Model.
Information Systems Research, 11(4):342–365, December 2000. Publisher: IN-
FORMS.
[451] Viswanath Venkatesh and Hillol Bala. Technology acceptance model 3 and a
research agenda on interventions. Decision sciences, 39(2):273–315, 2008.
[452] Viswanath Venkatesh and Hillol Bala. Technology Acceptance Model 3
and a Research Agenda on Interventions. Decision Sciences, 39(2):273–
315, 2008. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1540-
5915.2008.00192.x.
[453] Viswanath Venkatesh and Fred D. Davis. A Theoretical Extension of the Tech-
nology Acceptance Model: Four Longitudinal Field Studies. Management Sci-
ence, 46(2):186–204, February 2000. Publisher: INFORMS.
[454] Kailas Vodrahalli, Roxana Daneshjou, Tobias Gerstenberg, and James Zou.
Do humans trust advice more if it comes from ai? an analysis of human-ai
interactions. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics,
and Society, pages 763–777, 2022.
[455] Ruth Waitzberg, Nora Gottlieb, Wilm Quentin, Reinhard Busse, and Dan
Greenberg. Dual agency in hospitals: What strategies do managers and physi-
cians apply to reconcile dilemmas between clinical and economic considerations?
10.14279/depositonce-12507, 2021.
[456] April Yi Wang, Dakuo Wang, Jaimie Drozdal, Michael Muller, Soya Park,
Justin D. Weisz, Xuye Liu, Lingfei Wu, and Casey Dugan. Documentation
Matters: Human-Centered AI System to Assist Data Science Code Documen-
tation in Computational Notebooks. ACM Transactions on Computer-Human
Interaction, 29(2):17:1–17:33, January 2022.
[457] Bin Wang, Yukun Liu, Jing Qian, and Sharon K. Parker. Achieving Effective
Remote Working During the COVID-19 Pandemic: A Work Design Perspective.
Applied Psychology, 70(1):16–59, January 2021. Publisher: John Wiley & Sons,
Ltd.
[458] Dakuo Wang, Elizabeth Churchill, Pattie Maes, Xiangmin Fan, Ben Shneider-
man, Yuanchun Shi, and Qianying Wang. From Human-Human Collaboration
to Human-AI Collaboration: Designing AI Systems That Can Work Together
with People. In Extended Abstracts of the 2020 CHI Conference on Human
Factors in Computing Systems, CHI EA ’20, pages 1–6, New York, NY, USA,
April 2020. Association for Computing Machinery.
[459] Weiyu Wang and Keng Siau. Artificial intelligence, machine learning, automa-
tion, robotics, future of work and future of humanity: A review and research
agenda. Journal of Database Management (JDM), 30(1):61–79, 2019.
[460] Justin D. Weisz, Michael Muller, Stephanie Houde, John Richards, Steven I.
Ross, Fernando Martinez, Mayank Agarwal, and Kartik Talamadupula. Per-
fection Not Required? Human-AI Partnerships in Code Translation. In 26th
International Conference on Intelligent User Interfaces, IUI ’21, pages 402–412,
New York, NY, USA, April 2021. Association for Computing Machinery.
[461] Andre M Weitzenhoffer and Ernest R Hilgard. Stanford hypnotic susceptibility
scale, form C, volume 27. Palo Alto, CA: Consulting Psychologists Press, 1962.
[462] Emma J. Williams, Amy Beardmore, and Adam N. Joinson. Individual differ-
ences in susceptibility to online influence: A theoretical review. Computers in
Human Behavior, 72:412–421, July 2017.
[463] David X. H. Wo, Marshall Schminke, and Maureen L. Ambrose. Trickle-Down,
Trickle-Out, Trickle-Up, Trickle-In, and Trickle-Around Effects: An Integrative
Perspective on Indirect Social Influence Phenomena. Journal of Management,
45(6):2263–2292, July 2019. Publisher: SAGE Publications Inc.
[464] Marcel Woide, Dina Stiegemeier, Stefan Pfattheicher, and Martin Baumann.
Measuring driver-vehicle cooperation: development and validation of the
human-machine-interaction-interdependence questionnaire (hmii). Transporta-
tion research part F: traffic psychology and behaviour, 83:424–439, 2021.
[465] M. J. Wolf, K. W. Miller, and F. S. Grodzinsky. Why We Should Have Seen That
Coming: Comments on Microsoft’s Tay “Experiment,” and Wider Implications.
The ORBIT Journal, 1(2):1–12, January 2017.
[466] Wendy Wood. Attitude Change: Persuasion and Social Influ-
ence. Annual Review of Psychology, 51(1):539–570, 2000. eprint:
https://doi.org/10.1146/annurev.psych.51.1.539.
[467] Samuel C Woolley and Philip Howard. Computational propaganda worldwide:
Executive summary. 2017.
[468] M. Workman. The effects from technology-mediated interaction and open-
ness in virtual team performance measures. Behaviour & Information Tech-
nology, 26(5):355–365, September 2007. Publisher: Taylor & Francis eprint:
https://doi.org/10.1080/01449290500402809.
[469] Julia L. Wright, Stephanie A. Quinn, Jessie Y.C. Chen, and Michael J. Barnes.
Individual Differences in Human-Agent Teaming: An Analysis of Workload
and Situation Awareness through Eye Movements. Proceedings of the Human
Factors and Ergonomics Society Annual Meeting, 58(1):1410–1414, September
2014. Publisher: SAGE Publications Inc.
[470] Wei Xu. Toward human-centered AI: a perspective from human-computer in-
teraction. Interactions, 26(4):42–46, June 2019.
[471] Wei Xu, Liezhong Ge, and Zaifeng Gao. Human-AI interaction: An emerging
interdisciplinary domain for enabling human-centered AI. arXiv:2112.01920
[cs], October 2021. arXiv: 2112.01920.
[472] Heetae Yang and Hwansoo Lee. Understanding user behavior of virtual personal
assistant devices. Information Systems and e-Business Management, 17(1):65–
87, March 2019.
[473] Adrienne Yapo and Joseph Weiss. Ethical Implications of Bias in Machine
Learning. Hawaii International Conference on System Sciences 2018 (HICSS-
51), January 2018.
[474] Yuandong Yi, Zhan Wu, and Lai Lai Tung. How Individual Dif-
ferences Influence Technology Usage Behavior? Toward an In-
tegrated Framework. Journal of Computer Information Systems,
46(2):52–63, December 2005. Publisher: Taylor & Francis eprint:
https://www.tandfonline.com/doi/pdf/10.1080/08874417.2006.11645883.
[475] Tom E. Yoon, Biswadip Ghosh, and Bong-Keun Jeong. User Acceptance of
Business Intelligence (BI) Application: Technology, Individual Difference, So-
cial Influence, and Situational Constraints. In 2014 47th Hawaii International
Conference on System Sciences, pages 3758–3766, January 2014. ISSN: 1530-
1605.
[476] Zornitsa Yordanova. Gamification as a Tool for Supporting Artificial In-
telligence Development State of Art. In Miguel Botto-Tobar, Marcelo
Zambrano Vizuete, Pablo Torres-Carri´on, Sergio Montes Le´on, Guillermo
Pizarro V´asquez, and Benjamin Durakovic, editors, Applied Technologies, Com-
munications in Computer and Information Science, pages 313–324, Cham, 2020.
Springer International Publishing.
[477] Yu Yuan, Janet Fulk, Michelle Shumate, Peter R. Monge, J. Alison Bryant, and
Matthew Matsaganis. Individual Participation in Organizational Information
Commons. Human Communication Research, 31(2):212–240, 2005. eprint:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1468-2958.2005.tb00870.x.
[478] Zahra Zahedi and Subbarao Kambhampati. Human-AI Symbiosis: A Survey of
Current Approaches, March 2021. arXiv:2103.09990 [cs].
[479] Mike Zajko. Conservative AI and social inequality: conceptualizing alternatives
to bias through social theory. AI & SOCIETY, 36(3):1047–1056, September
2021.
[480] Rui Zhang, Nathan J. McNeese, Guo Freeman, and Geoff Musick. ”An Ideal
Human”: Expectations of AI Teammates in Human-AI Teaming. Proceedings of
the ACM on Human-Computer Interaction, 4(CSCW3):246:1–246:25, January
2021.
[481] Fangyun Zhao, Curt Henrichs, and Bilge Mutlu. Task Interdependence in
Human-Robot Teaming. In 2020 29th IEEE International Conference on Robot
and Human Interactive Communication (RO-MAN), pages 1143–1149, August
2020. ISSN: 1944-9437.
[482] Tao Zhou. Understanding online community user participation: a social in-
fluence perspective. Internet Research, 21(1):67–81, January 2011. Publisher:
Emerald Group Publishing Limited.
[483] Xiao-Yun Zhou, Yao Guo, Mali Shen, and Guang-Zhong Yang. Artificial In-
telligence in Surgery. arXiv:2001.00627 [physics], December 2019. arXiv:
2001.00627.
[484] Mengxiao Zhu, Yun Huang, and Noshir S. Contractor. Motivations for self-
assembling into project teams. Social Networks, 35(2):251–264, May 2013.
[485] James Zou and Londa Schiebinger. AI can be sexist and racist - it’s time to
make it fair. Nature, 559(7714):324–326, July 2018. Publisher: Nature Publishing
Group.