See also: Implementing Agents
An agent is a kind of Thing which exhibits agency. It is influenced by external forces or inputs, but is not entirely determined by them. That is to say, it is not entirely under control. Agents possess a breadth and depth of behaviour, and more clearly exhibit changing states.
Compared to the other described models of interaction, the Agent exhibits the strongest sense of ‘deciding’ how, when, or if to respond. It might just as well refuse or ‘push back’. It might attempt to initiate interaction: acting as a siren or seducer, drawing in potential interlocutors.
Agents might express needs, desires and moods. They might pursue (or enforce) agendas. Although these are obviously associated with people and higher animals, agents aren’t necessarily literal living creatures. For example, a domestic automatic vacuum could (poetically) be said to be ‘satisfied’ when it has exhaustively cleaned the floors, to ‘hunger’ for electricity (and thus return to its base station as needed), and to be ‘courteous’ in avoiding collisions and damage. We can roughly configure the cleaner, but for the most part it has enough ‘agency’ to go about its business, being only indirectly influenced by us.
The design of an Agent is useful because:
How is it different from Things?
The goals when making Agents are:
Risks when designing Agents are:
Examples of Agents range from simple critters through to humans. In contemporary technology, we see Agents being designed as non-player characters in games, chatbots and voice assistants.
In computing history, there is a recurring vision of ‘personal agents’. This sometimes takes the guise of a ‘digital butler’ or ‘personal assistant’, to whom some tasks and responsibilities can be delegated. In this document, those aren’t the kind of agents we refer to.
The qualities one might expect to see in an agent are temporality, sociality and relationality. An agent doesn’t necessarily have to have all of these qualities, or have them to a high degree, but they are clear factors distinguishing it from other models of interactivity.
Agents have temporality: they are affected by what has happened in the past. For humans, an example might be mood, which colours our whole world, altering how we carry ourselves, how we express ourselves and how things affect us.
Temporality is not exclusive to Agents. Things can exhibit a material mode of temporality. For example, a piece of paper scrunched into a ball and opened out again carries the creases and folds from before. Material wears out, or gives out: an elastic that no longer contracts with the same speed and force. And material can break, ruined not just by wear but by a freak event, such as a glass jar dropped onto concrete.
The most obvious extension of Agent temporality is toward the future. Agents strive toward balance and the satisfaction of desires; they plot and orchestrate schemes, large and small; they anticipate. While Things and Tools can be imprinted by past use and past circumstances, they are only ever met in the present.
Agents have situatedness insofar as they are affected not just by direct forces, but by what’s going on in the surrounding situation.
This can include local physical forces: a Thing might be affected by a physical touch as well as by forces like wind and temperature. It can also include interaction between Things, such as a flame lighting the wick of a candle, or one domino knocking against another.
The Agent, however, can be affected by higher-level interactions, and might be said to have sociality. It is a member of the social order, being ‘read by’ and ‘reading’ the interactions amongst agents. ‘Reading’ can be explicit, where one purposefully observes another, but we should also consider implicit participation and affect.
Consider, for example, how an intelligent dog might behave in relation to its owner when the owner meets someone. If the owner expresses intimacy, the dog might relax. If the owner expresses suspicion or fear, the dog might likewise be on guard. The dog does not necessarily mirror the owner’s vibe, but on some level seems to fold it into its own interpretation.
For the most part, Tools and Things are blind to who or what uses or affects them. They maintain no special status between this user and that user. Agents, on the other hand, can have relationality. They may react differently depending on which other agent they perceive, or which agent is interacting with them. Or perhaps if agents A and B are both present, an Agent will be affected differently than if only A or only B were there alone.
In digital artifacts, we glimpse a basic form of relationality. For example, consider a computer set up for multiple users. Signing in as one user might change the desktop wallpaper and colour theme, open the last-used apps and have that user’s documents ready. The form of the artifact changes in relation to its ‘recognising’ a particular user. Function may change to a degree, as the operating system and apps might maintain separate settings for each user. Consequently, behaviour might also change, for example whether menus animate when displaying or display instantly. Typically, these changes will be very minor.
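As a rough sketch (in TypeScript, with invented names; not any real operating system’s API), such per-user recognition might be represented like this:

```typescript
// Hypothetical sketch: per-user profiles as a minimal form of relationality.
// The names (UserProfile, applyProfile) are illustrative only.

interface UserProfile {
  wallpaper: string;        // form: appearance changes per recognised user
  colourTheme: "light" | "dark";
  lastUsedApps: string[];   // function: which apps reopen on sign-in
  animateMenus: boolean;    // behaviour: how the interface moves
}

const profiles: Record<string, UserProfile> = {
  alice: { wallpaper: "dunes.jpg", colourTheme: "dark",  lastUsedApps: ["editor"],          animateMenus: true },
  bob:   { wallpaper: "reef.jpg",  colourTheme: "light", lastUsedApps: ["mail", "browser"], animateMenus: false },
};

// On sign-in, the artifact 'recognises' the user and adjusts itself accordingly.
function applyProfile(userId: string): UserProfile | undefined {
  const profile = profiles[userId];
  if (profile) {
    console.log(`Applying ${profile.colourTheme} theme and reopening ${profile.lastUsedApps.join(", ")}`);
  }
  return profile;
}

applyProfile("alice");
```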
This agent is based on the Circumplex model of emotion. Rather than discrete states such as ‘happy’ or ‘sad’, it has bipolar (-1..1) values for valence and arousal. It also has a scalar (0..1) value for ‘energy’.
Clicking ‘reward’ pushes the valence in a positive direction; ‘punish’ does the opposite. ‘Surprise’ gives arousal a boost. The ‘ambient stimulation’ slider is meant to represent the environment the agent is in. If it’s in the middle, the environment is neutral. Moving the slider makes the environment stimulating or draining, in terms of arousal.
The energy of the agent determines how much arousal it can sustain; when arousal is low, energy recharges. Valence slowly drifts back to the neutral middle position.
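In code, such a model might be sketched roughly as follows (in TypeScript; the constants and exact update rules here are assumptions for illustration, not the demo’s actual implementation):

```typescript
// Minimal sketch of a circumplex-style agent state, updated once per tick.
// Constants are arbitrary; the real demo's rules may differ.

interface AgentState {
  valence: number; // -1..1, negative to positive feeling
  arousal: number; // -1..1, calm to excited
  energy: number;  //  0..1, how much arousal can be sustained
}

const clamp = (v: number, min: number, max: number) => Math.min(max, Math.max(min, v));

// Discrete inputs, as in the demo's buttons.
function reward(s: AgentState)   { s.valence = clamp(s.valence + 0.2, -1, 1); }
function punish(s: AgentState)   { s.valence = clamp(s.valence - 0.2, -1, 1); }
function surprise(s: AgentState) { s.arousal = clamp(s.arousal + 0.3, -1, 1); }

// Called every tick. `ambient` is the stimulation slider, -1 (draining) to 1 (stimulating).
function update(s: AgentState, ambient: number) {
  // The environment nudges arousal up or down.
  s.arousal = clamp(s.arousal + ambient * 0.01, -1, 1);

  if (s.arousal > 0) {
    // Sustaining arousal costs energy...
    s.energy = clamp(s.energy - s.arousal * 0.005, 0, 1);
    // ...and with little energy left, high arousal cannot be sustained.
    if (s.energy < 0.1) s.arousal = clamp(s.arousal - 0.02, -1, 1);
  } else {
    // Low arousal lets energy recharge.
    s.energy = clamp(s.energy + 0.005, 0, 1);
  }

  // Valence slowly drifts back toward neutral.
  s.valence *= 0.995;
}
```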
Try this:
This illustrates how even a simple model can react to the same input differently depending on the environment and its own state.
To keep it simple, this example doesn’t do much in terms of behaviour. You could imagine the three core state variables (valence, arousal & energy) being used to choose or modulate behaviour.
Examples:
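As one hypothetical sketch of such a mapping (the behaviour names and thresholds below are invented for illustration):

```typescript
// Hypothetical: choosing and modulating behaviour from the three state variables.

type Behaviour = "rest" | "wander" | "play" | "sulk" | "startle";

function chooseBehaviour(s: { valence: number; arousal: number; energy: number }): Behaviour {
  if (s.energy < 0.15) return "rest";                         // too depleted to do much
  if (s.arousal > 0.6) return s.valence >= 0 ? "play" : "startle";
  if (s.valence < -0.4) return "sulk";
  return "wander";
}

// Parameters of a behaviour can also be modulated continuously,
// e.g. movement speed scaling with arousal and energy.
function movementSpeed(s: { arousal: number; energy: number }): number {
  return 0.2 + 0.8 * Math.max(0, s.arousal) * s.energy;
}
```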
Naturally, this is an extremely reductive view of emotion, both in terms of the model used and in how its variables are modulated in the demo.
See also: Implementing Agents