Friday, December 18, 2015

Developing a Social Interaction Mechanic - Part II

Goals

This next post will focus on the goals of the mechanic. Since we're building this mechanic for a very specific purpose (representing non-guaranteed informational exchanges in D&D), we need to keep a few aims of the system in mind:
  1. Strategy. The system should provide a degree of choice, allowing players to make meaningful decisions based on the abilities of their character and the situation at hand.
  2. Meaningful outcomes. Completing an EM should have a meaningful outcome which can change a given situation to a previously-inaccessible one. This also implies that there should be some incentive to completing an EM.
  3. Simplicity. When I say this, I'm being relative. The system has to be balanced: it should account for the various nuances of social interaction but remain simple enough that it takes only a few actual seconds to determine an outcome. Note that, since D&D combats can often take over half an hour to complete, the total running time of the interaction can stretch out, but individual steps should be easy to complete.
  4. Progression. Failing an EM must never deadlock the game; instead, it should progress to a new state. Similarly, progression must be ensured within the sequence: even if a full interaction takes half an hour to complete (which may happen if, say, a legal trial is taking place or a person is under interrogation), each step should reveal some new information.
If we do not follow these aims, the system runs the risk of being boring, over-complicated or having no effect.

Features

Now let's consider some of the features of human communication.

  1. Information is transmitted, usually imperfectly. All instances of communication involve information transfer, as touched on in the last post. There is a sender and a receiver of meaning: the sender relays some information to the receiver, who then interprets it. There are several ways in which meaning can be lost; the Internet is particularly prone to this. Wiio's laws comment on these problems, pointing to differences in language, culture and personal beliefs, as well as weaknesses of the transfer medium, as ways in which meaning is distorted or destroyed.
  2. The meaning of a message depends on prosodic features, nonverbal communication and word choice. Messages, even ones using the same words, can have very different meanings depending on how the message is produced and the context of its production. Sarcasm and irony are great examples of this: just consider how many different meanings can be obtained from the phrase "You look terrible" (some of them: mockery, pity, concern, even admiration...) Furthermore, message meanings can depend on the relationship of the sender and the receiver: the above phrase is interpreted very differently when a stranger says it compared to, say, a parent.
  3. Interpretation of a message's features depends on the receiver's prior knowledge. This idea builds on 1 and 2, as ultimately it concerns how the features of meaning are misinterpreted. Jargon provides a good example of how the features of a message require certain prior knowledge of the receiver, but this can apply to many other features of communication: if the receiver does not know that a particular inflection suggests sarcasm, or that a certain gesture symbolizes approval, or even if they don't speak the language, they are incapable of fully comprehending the message.
So based on these factors, we can see some goals which overlap with features. Progression can be ensured by the continual transmission of information: each successive "pass" informs the receiver further, allowing new options to open up, thereby satisfying the need for strategy, as new options require deeper choices. Furthermore, message meanings depend on features which provide different nuances, another level of strategy which can be categorized for simplicity. And, of course, enough information can lead to a final interpretation and action, providing a meaningful outcome. So now we can experiment with how to actually implement this system!

Designing the Mechanic

Based on the needs and features we've outlined, I'm inclined to rely on a connected directed acyclic graph to represent the systematic progression from each state of information-sharing to another, until a given "goal state" is reached. This doesn't require any actual implementation yet; we'll get to that in a future post. For now, we can construct a graph that obeys the following rules:
  • Each vertex represents a state of information
  • Each edge represents the addition of new information in a given form. While there are multiple ways to add new information, they will result in different states (e.g. the receiver may trust the information differently, or misinterpret it...) depending on how addition takes place (the given form). Since information is always added (we can't take away information), edges are directed and the graph is acyclic (we can't return to an earlier state).
  • Vertices with an outdegree of 0 are "goal states."
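As a rough sketch, these rules could be expressed in Python. The class name, method names and example states below are my own illustrations, not part of the mechanic itself:

```python
# A minimal sketch of the information-state graph described above.
class InfoGraph:
    """Directed acyclic graph: vertices are information states,
    edges are ways of adding new information in a given form."""

    def __init__(self):
        # state -> list of (form_of_addition, next_state)
        self.edges = {}

    def add_state(self, state):
        self.edges.setdefault(state, [])

    def add_edge(self, src, form, dst):
        # Edges are directed: information is only ever added.
        self.add_state(src)
        self.add_state(dst)
        self.edges[src].append((form, dst))

    def goal_states(self):
        # Vertices with an outdegree of 0 are "goal states".
        return [s for s, out in self.edges.items() if not out]

g = InfoGraph()
g.add_edge("no_info", "persuade", "partial_info_trusted")
g.add_edge("no_info", "intimidate", "partial_info_reluctant")
g.add_edge("partial_info_trusted", "persuade", "full_info")
print(g.goal_states())  # → ['partial_info_reluctant', 'full_info']
```

Note that two different forms of addition ("persuade" vs. "intimidate") lead to different states, matching the second rule above.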
The fact that we consider the graph to be acyclic means that every state - even those in which communication fails - has a small effect on the progress towards a meaningful outcome, which we need to be able to quantify. This becomes rather difficult as the number of states increases: if there are a thousand different states (easily reachable with a routine outdegree of 10, since just three successive exchanges give 10^3 = 1000 states), the bookkeeping needed to track each state's changes grows very quickly.

This of course assumes we always have around 10 choices. If we limit the outdegree of the vertices, however, we can significantly decrease the number of vertices we need to keep track of. Furthermore, we can limit the number of vertices by adding prerequisites to certain edges: certain choices only open up when we're at a certain vertex. Without jumping too deeply into implementation, if we have a vertex representing a particular state of information, the specific details of that information may naturally lead to other information being uncovered (if directions to an address were being provided, we'd have to know how to get to point B from point A before trying to understand getting from point B to point C).
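Prerequisite-gated edges might be sketched like this, assuming prerequisites are represented as a set of already-known facts (the representation and all names here are hypothetical):

```python
# Sketch: gating edges behind prerequisites to limit effective outdegree.
def available_choices(edges, known_facts):
    """Return only the transitions whose prerequisites are already known."""
    return [(form, dst) for form, dst, requires in edges
            if requires <= known_facts]  # set subset test

# The directions example from the post: understanding A->B must
# come before trying to understand B->C.
edges = [
    ("explain", "knows_A_to_B", set()),
    ("explain", "knows_B_to_C", {"knows_A_to_B"}),
]

print(available_choices(edges, set()))
# → [('explain', 'knows_A_to_B')]  (only the first leg is open)
print(available_choices(edges, {"knows_A_to_B"}))
# → both edges, since the B->C step has now opened up
```

The effective outdegree at any vertex is then only the number of edges whose prerequisites are satisfied, rather than the full edge list.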

Furthermore, we can categorize states of information based on multiple factors. Adding information can have multiple effects, depending on the success of the addition. Failure will naturally tend towards certain responses (frustration, anger, confusion...) that will not occur during success, while the way in which information is added will naturally provoke certain new emotional states between the sender and the receiver.
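One way this success/failure branching could look in code. This is a hedged sketch: the success chance, the mood labels and the state fields are all placeholder assumptions, not part of the design:

```python
import random

# Sketch: one "pass" of information whose outcome depends on success.
def attempt_exchange(state, form, success_chance, rng=random):
    """Failure still produces a new state rather than deadlocking,
    satisfying the progression goal above. The `form` argument would
    select which edge is taken; it is unused in this sketch."""
    if rng.random() < success_chance:
        # Success: information is added, receiver stays receptive.
        return {"info": state["info"] + 1, "mood": "receptive"}
    # Failure: no new information, but a new emotional state.
    return {"info": state["info"], "mood": rng.choice(["frustrated", "confused"])}

state = {"info": 0, "mood": "neutral"}
state = attempt_exchange(state, "persuade", success_chance=0.7)
print(state)
```

Because failure changes the mood but not the information count, failed passes still move the interaction to a distinct vertex, as the graph rules require.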

Conclusion

This has been a bit theoretical, but it should provide some good groundwork before diving into methods for implementing the mechanic. Let's review where we stand. The mechanic should allow for clear, comprehensive and meaningful player choices. These choices should allow progression to a clear goal without the players getting lost or confused. Additionally, the mechanic should represent the imperfect exchange of information inherent to conversation, which eventually leads to a decision point at which the parties decide how to act based on this information. Given this, we can use a tree-like graph to "store" the information: each vertex stores multiple attributes of the current state of information and a list of possible choices for moving to another state, while we aim to keep the number of states low enough that the system is easy to use.

In the next post, I'll start on an implementation of these ideas.
