Friday, December 18, 2015

Developing a Social Interaction Mechanic - Part II

Goals

This next post will focus on the goals of the mechanic. Since we're building this mechanic for a very specific purpose (representing non-guaranteed informational exchanges in D&D), we need to keep a few aims of the system in mind:
  1. Strategy. The system should provide a degree of choice, allowing players to make meaningful decisions based on the abilities of their character and the situation at hand.
  2. Meaningful outcomes. Completing an EM should have a meaningful outcome which can change a given situation to a previously-inaccessible one. This also implies that there should be some incentive to completing an EM.
  3. Simplicity. When I say this, I'm being relative. The system has to be balanced: it should account for the various nuances of social interaction but remain simple enough that it takes only a few actual seconds to determine an outcome. Note that, since D&D combats often take over half an hour to complete, the total running time of the interaction can stretch out, but individual steps should be easy to complete.
  4. Progression. Failing an EM must never deadlock the game, but instead progress to a new state. Similarly, progression must be ensured within the sequence: even if we take half an hour to complete a full interaction (which may happen if say, a legal trial is taking place or a person is under interrogation), each step should reveal some new information.
If we do not follow these aims, the system runs the risk of being boring, over-complicated or having no effect.

Features

Now let's consider some of the features of human communication.

  1. Information is transmitted, usually imperfectly. All instances of communication involve information transfer, as touched on in the last post. There is a sender and a receiver of meaning. The sender relays some information to the receiver, who then interprets the information. There are several ways in which meaning can be lost: the Internet is particularly prone to this. Wiio's laws comment on these problems, remarking on the inherent differences in language, culture and personal beliefs, and the weaknesses of the transfer system, all as ways in which meaning is distorted or destroyed.
  2. The meaning of a message depends on prosodic features, nonverbal communication and word choice. Messages, even ones using the same words, can have very different meanings depending on how the message is produced and the context of its production. Sarcasm and irony are great examples of this: just consider how many different meanings can be obtained from the phrase "You look terrible" (some of them: mockery, pity, concern, even admiration...) Furthermore, message meanings can depend on the relationship of the sender and the receiver: the above phrase is interpreted very differently when a stranger says it compared to, say, a parent.
  3. Interpretation of a message's features depends on the receiver's prior knowledge. This idea builds on 1 and 2, as ultimately it concerns how the features of meaning are misinterpreted. Jargon provides a good example of how the features of a message require certain prior knowledge of the receiver, but this can apply to many other features of communication: if the receiver does not know that a particular inflection suggests sarcasm, or that a certain gesture symbolizes approval, or even if they don't speak the language, they are incapable of fully comprehending the message.
So based on these factors, we can see some goals which overlap with features. Progression can be ensured by the continual transmission of information: each successive "pass" informs the receiver further, allowing new options to open up, thereby satisfying the need for strategy, as new options require deeper choices. Furthermore, message meanings depend on features which provide different nuances, another level of strategy which can be categorized for simplicity. And, of course, enough information can lead to a final interpretation and action, providing a meaningful outcome. So now we can experiment with how to actually implement this system!

Designing the Mechanic

Based on the needs and features we've outlined, I'm inclined to rely on a connected, acyclic directed graph to represent the systematic progression from each state of information-sharing to another, until a given "goal state" is reached. This doesn't require any actual implementation yet; we'll get to that in a future post. For now, we can construct a graph that obeys the following rules:
  • Each vertex represents a state of information
  • Each edge represents the addition of new information in a given form. While there are multiple ways to add new information, they will result in different states (e.g. the receiver may trust the information differently, or misinterpret it...) depending on how the addition takes place (the given form). Since information is always added (we can't take away information), edges are directed and the graph is acyclic (we can't return to an earlier state).
  • Vertices with an outdegree of 0 are "goal states."
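As a rough sketch in Python (the state and edge names here are purely hypothetical placeholders), the three rules above might look like a dictionary of vertices, each mapping edge labels to successor states:

```python
# A minimal sketch of the proposed graph. Each key is an information
# state; each outgoing edge is labeled with the form in which new
# information is added. All names are hypothetical.
graph = {
    "intro": {"ask_directly": "suspicious", "share_story": "warming_up"},
    "suspicious": {"apologize": "warming_up"},
    "warming_up": {"ask_about_murder": "told_truth"},
    "told_truth": {},  # outdegree 0: a goal state
}

def goal_states(g):
    """Vertices with no outgoing edges are goal states."""
    return [v for v, edges in g.items() if not edges]

print(goal_states(graph))  # → ['told_truth']
```

Since information is only ever added, no edge ever points back to an earlier state, which keeps the structure acyclic by construction.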
The fact that we consider the graph to be acyclic means that every state - even those in which communication fails - has a small effect on the progress towards a meaningful outcome, which we need to be able to quantify. This becomes rather difficult as the number of states increases: if there are a thousand different states (easily reachable with a routine outdegree of 10, which requires only 3 "rounds" of exchange, since 10^3 = 1000), the bookkeeping needed to track changes across all of them grows just as quickly.

This of course assumes we always have around 10 choices. If we limit the outdegree of the vertices, however, we can significantly decrease the number of vertices we need to keep track of. Furthermore, we can limit the number of vertices by adding prerequisites to certain edges: certain choices only open up when we're at a certain vertex. Without jumping too deeply into implementation, if we have a vertex representing a particular state of information, the specific details of that information may naturally lead to other information being uncovered (if directions to an address were being provided, we'd have to know how to get to point B from point A before trying to understand getting from point B to point C).
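The prerequisite idea can be sketched by attaching a set of required facts to each edge (the route-finding names below are hypothetical, echoing the A-to-B-to-C example above); a choice only appears once everything it depends on is already known:

```python
# A sketch of prerequisite-gated edges. Each edge is keyed by its label
# plus the set of facts the receiver must already hold; its value is the
# fact the edge reveals. All names are hypothetical.
edges = {
    ("A_to_B", frozenset()): "knows_route_AB",
    ("B_to_C", frozenset({"knows_route_AB"})): "knows_route_BC",
}

def available_choices(known):
    """Return edge labels whose prerequisites are satisfied."""
    return [label for (label, prereqs) in edges if prereqs <= known]

print(available_choices(set()))               # → ['A_to_B']
print(available_choices({"knows_route_AB"}))  # → ['A_to_B', 'B_to_C']
```

This keeps the effective branching factor low: most of the thousand theoretical states are simply never reachable from any given position.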

Furthermore, we can categorize states of information based on multiple factors. Adding information can have multiple effects, depending on the success of the addition. Failure will naturally tend towards certain responses (frustration, anger, confusion...) that will not occur during success, while the way in which information is added will naturally provoke certain new emotional states between the sender and the receiver.
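A minimal sketch of this categorization, assuming hypothetical mood lists, could map the success or failure of an addition (plus something like a die roll) to the receiver's new emotional state:

```python
# A sketch of outcome categorization. Failure and success each pull the
# receiver toward different emotional responses, as described above.
# The mood lists are hypothetical.
FAILURE_MOODS = ["frustration", "anger", "confusion"]
SUCCESS_MOODS = ["trust", "curiosity", "sympathy"]

def resolve_addition(success, roll):
    """Map a success flag and a die roll to the receiver's new mood."""
    pool = SUCCESS_MOODS if success else FAILURE_MOODS
    return pool[roll % len(pool)]

print(resolve_addition(False, 2))  # → 'confusion'
```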

Conclusion

This has been a bit theoretical, but it should provide some good groundwork before diving into methods for implementing the mechanic. Let's review where we stand. The mechanic should allow for clear, comprehensive and meaningful player choices. These choices should allow progression to a clear goal without the players getting lost or confused. Additionally, the mechanic should represent the imperfect exchange of information inherent to conversation, which eventually leads to a decision point at which the parties decide how to act based on this information. Given this, we can use a tree graph to "store" the information: each vertex stores multiple attributes of the current state of information and a list of possible choices for moving to another state, and we aim to keep the number of states low enough that the system is easy to use.
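Putting the review above together, a single vertex might be sketched as a small record type (field names and example states are hypothetical, borrowing the barkeep scene from the previous post):

```python
from dataclasses import dataclass, field

# A sketch of one vertex of the tree: attributes of the current state of
# information plus the choices leading out of it. Names are hypothetical.
@dataclass
class InfoState:
    description: str  # what has been conveyed so far
    mood: str         # the receiver's current emotional state
    choices: dict = field(default_factory=dict)  # label -> next InfoState

goal = InfoState("barkeep shares the rumour", "trusting")
start = InfoState("barkeep wary of strangers", "suspicious",
                  {"ask_about_trophy": goal})
print(len(goal.choices))  # → 0, i.e. a goal state
```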

In the next post, I'll start on an implementation of these ideas.

Tuesday, December 15, 2015

Developing a Social Interaction Mechanic - Part I

Introduction

As of now, whenever social interaction comes up in my D&D games, it's a clusterfuck. There's a bit of actual acting roleplay, which is a bit clunky (mostly for me since all my off-the-cuff characters are essentially the same), and a bit of narration (I explain as DM what the NPCs say or do or want), which is hard to make interesting. The whole situation is disorganized and stressful: it's totally up to me how much information I do or don't share with the players, and usually I don't even know myself (since I generally just write down a few characters and their motives before starting an adventure).

I don't want to do that anymore when there is an alternative: come up with a social interaction mechanic.

Many DMs have tried different ways of doing this. When I started playing D&D with 4e, we used skill challenges from time to time, which were really railroady. Other times, a simple one-off skill check (roll Diplomacy, roll Bluff, roll Intimidate...) was all that we did.
Other worthy efforts I have seen include a card system and a modified diplomacy check. I'm sure there's a lot more out there, but I have a good feeling they all run up against the biggest problem with representing social interaction in an RPG:

Social interaction is made up of a vast and complex matrix of relationships, emotions, beliefs and physical actions which are obvious to anyone who is capable of social interaction.

Most D&D players are not very familiar with creating fireballs out of pure energy or headbutting a hobgoblin off a bridge, but if they're in your D&D group they are capable of social interaction (the skill at which varies between people of course). So most of us are aware of patterns of social interaction - things to expect or do. We know that, for instance, a hungry person might be more irritable, or a grieving person may not be in the mood for levity, or that a superior may expect a certain degree of etiquette from a subordinate, and we modify the way in which we interact with these people appropriately.

And most of the time in D&D, these expectations are met: the baron is upset by the rudeness of the players; the lost child is grateful to be found; the pirates are excited by the prospect of riches. But occasionally there are situations where it is difficult to know how best to proceed.

Say the players are investigating a close NPC friend's murder in a small frontier village. The villagers can be assumed to be generally suspicious of the players: in a dangerous region like this, outsiders could mean trouble. Authorities are less accommodating and will harass foreigners. However, certain qualities are valued highly among the villagers: strength, resourcefulness and honour are all of importance. The social structure is one of clans, divided along family rather than professional lines. Keeping the peace among villagers is more important than fair justice, as each clan cares for its own and is indifferent to the plights of the others.

These are all beliefs held by the members of the village about how their world works. It means they will not be coerced or threatened into disclosing information, and any outsider must prove their worth if they want to be treated warmly. If the players march into town, traipse into the bar and say, "Barkeep, what can you tell us about the murder of William the Wanderer?" the barkeep will politely say "never heard of him" and ignore these arrogant strangers. Anyone else questioned will likely say the same thing, and if the players continue nosing around they will be escorted out of town.

If, however, a single player comes up to the bar and, noticing a prominent hunting trophy on the inn wall, asks the barkeep to tell the story of how the stag was killed, the player can use the barkeep's beliefs and emotions to gain an "in" to the village's culture. The player may respond appropriately to the barkeep's story, speaking at the right moments and encouraging the barkeep to continue, in order to lead the barkeep to believe that the player is not a threat.

In the first situation, a group of strangers barge in looking to enforce justice in a way they see fit. In the second, a single instigator persuades an insider to trust them. Both situations are based on a fundamental exchange of information: in the first, the players want to know what happened to William the Wanderer. In the second, the player wants the barkeep to know I am your friend. The first situation is an interrogation; the second, a manipulation. These are both tactics used to draw information out of the barkeep, but they work in very different ways and their effects are very different depending on the receiver. If, rather than the village barkeep, the players tried to interrogate a defenseless craven, they would presumably have more success.

Thus, the fundamental goal of a social interaction mechanic must be the exchange of information or meaning. Since conversations are almost always for the purpose of exchanging information (knowing how someone is doing, what the weather is like, how to reach a destination...) we can devise a D&D model based on a system where one party tries to encourage another to give information in exchange for other information, essentially bartering what they know - or claim to know - with each other.

For the next few posts, I am going to try to develop this topic with what I call EMs, short for Exchanges of Meaning. Admittedly, there is a whole bunch of research in the topic of human communication which I have yet to embark fully upon, but I intend to move through it as I go and develop the mechanic with it.