Stochastic models of trust: Decentralised multi-agent learning for social dynamics
| Authors | |
|---|---|
| Supervisors | |
| Cosupervisors | |
| Award date | 26-11-2025 |
| ISBN | |
| Number of pages | 213 |
| Organisations | |
| Abstract | This dissertation presents stochastic models for studying trust and opinion dynamics in multi-agent systems. It consists of two parts. The first focuses on trust, beginning with models of agents learning to trust an institution of uncertain reliability. Individual agents update their beliefs using Bayesian learning, and analytical results are obtained for the probability of losing trust and for the expected time to distrust. The model is extended to interacting agents, where varying levels of communication influence collective trust; simulations show that limited communication can, under some conditions, promote trust in reliable institutions. The work then examines interpersonal trust through stochastic coordination games, in which agents learn to trust or doubt one another through repeated interactions. Analytical and simulation results show that populations converge to uniform trust or uniform doubt. The model is further extended to a networked “all-or-nothing” public goods setting, where network density and structure are shown to affect both transient and long-run trust levels. The second part studies opinion dynamics. A stochastic compartmental model treats trust in institutions as an opinion-contagion process, revealing how demographic and interaction parameters influence the persistence of distrust. A second model introduces agents whose opinions evolve through both personal experience and social influence, allowing for self-reflective opinion change. The final model uses reinforcement learning to describe opinion formation, showing that asymmetric learning leads to absorbing consensus states, while symmetric learning yields ergodic dynamics. |
| Document type | PhD thesis |
| Language | English |
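The abstract's institutional-trust model, in which an agent learns about an institution of uncertain reliability by Bayesian updating and may cross into distrust, can be illustrated with a minimal sketch. The Beta-Bernoulli prior, the posterior-mean distrust threshold, and all parameter names below are illustrative assumptions, not the thesis's actual specification:

```python
import random

def simulate_trust(p_reliable=0.7, threshold=0.5, steps=200, seed=0):
    """Beta-Bernoulli agent learning an institution's reliability (illustrative).

    The agent observes success/failure interactions, each a Bernoulli(p_reliable)
    draw, and maintains a Beta(a, b) posterior over the success probability.
    It is said to 'distrust' once the posterior mean falls below `threshold`
    (an assumed rule). Returns (final posterior mean, first distrust step or None).
    """
    rng = random.Random(seed)
    a, b = 1.0, 1.0            # uniform Beta(1, 1) prior
    first_distrust = None
    for t in range(1, steps + 1):
        if rng.random() < p_reliable:
            a += 1.0           # positive experience
        else:
            b += 1.0           # negative experience
        mean = a / (a + b)     # posterior mean belief in reliability
        if mean < threshold and first_distrust is None:
            first_distrust = t # time to distrust, as studied analytically
    return a / (a + b), first_distrust
```

Running many such trajectories gives empirical analogues of the quantities the thesis treats analytically: the fraction of runs where `first_distrust` is not `None` estimates the trust-loss probability, and its average over those runs estimates the expected time to distrust.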
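The contrast drawn in the abstract between asymmetric learning (absorbing consensus) and symmetric learning (ergodicity) can also be sketched. The voter-style pairwise update below is one stylized reading, not the thesis's actual reinforcement-learning rule; the `noise` perturbation standing in for the symmetric component is likewise an assumption:

```python
import random

def opinion_rl(n=50, steps=20000, lr=0.1, noise=0.0, seed=1):
    """Pairwise reinforcement-style opinion dynamics (illustrative sketch).

    Each agent holds a propensity p in [0, 1] of voicing opinion A. At each
    step two random agents voice an opinion and each nudges its propensity
    toward what it heard by rate `lr`. With noise=0 the only fixed points are
    the consensus states (all 0 or all 1), which are absorbing. With
    `noise > 0`, a symmetric pull toward indifference (0.5) prevents
    absorption, so the chain keeps fluctuating. Returns the mean propensity.
    """
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        voiced_i = 1.0 if rng.random() < p[i] else 0.0
        voiced_j = 1.0 if rng.random() < p[j] else 0.0
        # reinforcement: each agent moves toward the opinion it just heard
        p[i] += lr * (voiced_j - p[i])
        p[j] += lr * (voiced_i - p[j])
        if noise:
            # symmetric perturbation toward indifference keeps the chain ergodic
            p[i] += noise * (0.5 - p[i])
            p[j] += noise * (0.5 - p[j])
    return sum(p) / n
```

In the `noise=0` case, once every propensity hits the same boundary value no update can move it again, which is the absorbing-consensus behaviour the abstract describes; any positive `noise` destroys those fixed points.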
