Trust in Large Language Model (LLM) chatbots depends not only on what these systems do but also on how their behavior is governed and communicated. We present Trust Mediator, a workbench that supports service owners in authoring and assessing principle sets for LLM-driven chatbots through persona-based exploration and structured scaffolds. To examine this workflow, we use three analytic lenses—specificity, coverage, and coherence—to characterize the principles produced. In an exploratory between-subjects study, we compared manual and assisted principle authoring. Participants in both conditions viewed principles as useful for governing and assessing chatbot behavior. Assisted authoring was generally perceived as more supportive and tended to broaden coverage. Manual authoring required more effort but yielded principles that were significantly more specific. These findings highlight the complementary strengths of assisted and manual pathways and illustrate the value of treating principle sets as design objects within governance workflows. Beyond their analytic role in this study, the lenses themselves suggest opportunities for supporting the construction and inspection of principle sets.
ACM CHI Conference on Human Factors in Computing Systems