Abstract
The use of multi-agent systems to solve large-scale problems can be an effective method to reduce physical and computational burdens; however, these systems should be robust to sub-system failures. In this work, we consider the problem of designing utility functions, which agents seek to maximize, as a method of distributed optimization in resource allocation problems. Though recent work has shown that optimal utility design can bring system operation within a reasonable approximation of the optimum, our results extend the existing literature by investigating how robust the system's operation is to defective agents and by quantifying the achievable performance guarantees in this setting. Interestingly, we find that there is a trade-off between improving the robustness of the utility design and offering good nominal performance. We characterize this trade-off in the set of resource covering problems and find that considerable gains in robustness can be made by sacrificing some nominal performance.





Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Notes
Because we consider a worst-case analysis, we cannot differentiate between the roles of two agents: if we designed the utility rules of agents i and j differently, a problem may be realized where their roles are reversed. If the system designer had full knowledge of the problem structure, designing agents' utilities heterogeneously can certainly help; however, it is currently unknown whether player-specific utility functions can help in improving worst-case performance guarantees across a class of problem instances. Additionally, adopting a local utility rule that is consistent for each player lets us maintain the potential game structure.
This can be seen by transforming each game \(G \in \mathcal {G}\) into one with two actions by removing all actions but the worst equilibrium \(a^\mathrm{Ne}\) and the optimal allocation \(a^\mathrm{opt}\). Because \(a^\mathrm{Ne}\) remains a Nash equilibrium, the price of anarchy is unchanged.
References
Bertizzolo L, D’Oro S, Ferranti L et al (2020) SwarmControl: an automated distributed control framework for self-optimizing drone networks. In: IEEE INFOCOM 2020—IEEE conference on computer communications, pp 1768–1777. https://doi.org/10.1109/INFOCOM41043.2020.9155231
Brown GW (1951) Iterative solution of games by fictitious play. In: Koopmans TC (ed) Activity analysis of production and allocation. Wiley, New York, pp 374–376
Brown R, Rossi F, Solovey K et al (2021) On local computation for network-structured convex optimization in multi-agent systems. IEEE Trans Control Netw Syst. https://doi.org/10.1109/TCNS.2021.3050129
Bullo F, Frazzoli E, Pavone M et al (2011) Dynamic vehicle routing for robotic systems. Proc IEEE 99(9):1482–1504. https://doi.org/10.1109/JPROC.2011.2158181
Chandan R, Paccagnan D, Marden JR (2019) When Smoothness is not enough: toward exact quantification and optimization of the price of anarchy. In: Proceedings of the IEEE conference on decision and control, pp 4041–4046. arXiv:1911.07823v2
Gairing M (2009) Covering games: approximation through non-cooperation. In: Internet and network economics, pp 184–195. https://doi.org/10.1007/978-3-642-10841-9_18
Goemans M, Li E, Mirrokni V et al (2004) Market sharing games applied to content distribution in ad-hoc networks. In: Proceedings of the 5th ACM international symposium on mobile ad hoc networking and computing (MobiHoc '04), pp 55–66. https://doi.org/10.1145/989459.989467
Grimsman D, Ali MS, Hespanha JP et al (2018) Impact of information in greedy submodular maximization. In: 2017 IEEE 56th annual conference on decision and control (CDC), pp 2900–2905. https://doi.org/10.1109/CDC.2017.8264080
Hannan J (1957) Approximation to Bayes risk in repeated plays. In: Contributions to the theory of games, vol 3. Princeton University Press, Princeton, pp 97–139
Jaleel H, Abbas W, Shamma JS (2019) Robustness of stochastic learning dynamics to player heterogeneity in games. In: 2019 IEEE 58th conference on decision and control (CDC), pp 5002–5007. https://doi.org/10.1109/CDC40024.2019.9029471
Kitano H, Tadokoro S, Noda I et al (1999) RoboCup Rescue: search and rescue in large-scale disasters as a domain for autonomous agents research. In: IEEE SMC’99 conference proceedings. 1999 IEEE international conference on systems, man, and cybernetics (Cat. No.99CH37028), vol 6, pp 739–743. https://doi.org/10.1109/ICSMC.1999.816643
Li N, Marden JR (2013) Designing games for distributed optimization. IEEE J Sel Top Sign Proces 7(2):230–242. https://doi.org/10.1109/JSTSP.2013.2246511
Marden JR (2017) The role of information in distributed resource allocation. IEEE Trans Control Netw Syst 4(3):654–664. https://doi.org/10.1109/TCNS.2016.2553363
Mei W, Friedkin NE, Lewis K et al (2016) Dynamic models of appraisal networks explaining collective learning. In: 2016 IEEE 55th conference on decision and control, CDC 2016, pp 3554–3559. https://doi.org/10.1109/CDC.2016.7798803, arXiv:1609.09546
Nugraheni CE, Abednego L (2016) Multi agent hyper-heuristics based framework for production scheduling problem. In: 2016 International conference on informatics and computing (ICIC), pp 309–313. https://doi.org/10.1109/IAC.2016.7905735
Paccagnan D, Gairing M (2021) In congestion games, taxes achieve optimal approximation. arXiv:2105.07480v1
Paccagnan D, Chandan R, Marden JR (2018) Distributed resource allocation through utility design—Part I: optimizing the performance certificates via the price of anarchy. CoRR. arXiv:1807.01333
Paccagnan D, Chandan R, Ferguson BL et al (2021) Optimal taxes in atomic congestion games. ACM Trans Econ Comput. https://doi.org/10.1145/3457168
Roughgarden T (2012) Intrinsic robustness of the price of anarchy. Commun ACM 55(7):116–123. https://doi.org/10.1145/2209249.2209274
Salazar M, Tsao M, Aguiar I et al (2019) A congestion-aware routing scheme for autonomous mobility-on-demand systems. In: European control conference
Shamma JS (2007) Cooperative control of distributed multi-agent systems. Wiley Online Library, New York
Vetta A (2002) Nash equilibria in competitive societies, with applications to facility location, traffic routing and auctions. In: The 43rd annual IEEE symposium on foundations of computer science, 2002 proceedings, pp 416–425. https://doi.org/10.1109/SFCS.2002.1181966
Yildiz E, Acemoglu D, Ozdaglar A et al (2011) Discrete opinion dynamics with stubborn agents. Public Choice: Analysis of Collective Decision-Making eJournal
Additional information
This work is supported in part by ONR Grant #N00014-20-1-2359, AFOSR Grants #FA9550-20-1-0054 and #FA9550-21-1-0203, the Army Research Lab through ARL DCIST CRA W911NF-17-2-0181.
This article is part of the topical collection “Multi-agent Dynamic Decision Making and Learning” edited by Konstantin Avrachenkov, Vivek S. Borkar and U. Jayakrishnan Nair.
Appendices
Appendix A: Proof of Proposition 1
We note that this proof follows similarly to that of [5, 17], but now with the presence of stubborn agents. Here we go through the construction of the linear program and the important steps of the proof, and direct the reader to [5, 17] for a more detailed explanation. We begin with the problem of characterizing the price of anarchy of the class of games \(\mathcal {G}^m_{{\mathcal {W}},{\mathcal {F}}}\) with a single basis value function w (i.e., \({\mathcal {W}} = \{ \alpha w~\vert ~\alpha >0\}\)) while using a local utility rule \(f = {\mathcal {F}}(w)\). Each resource welfare function can therefore be written as \(w_r(x) = v_r w(x)\), where \(v_r\) is the ‘value’ of that specific resource. We discuss at the end how the solution for a single basis function extends to the original statement. In looking for price of anarchy bounds, we note that a class of resource covering problems \(\mathcal {G}\) with utility rule f has the same price of anarchy as the class of problems \(\mathcal {G}^*\) in which each agent has exactly two actions \({\mathcal {A}}_i = \{a_i^\mathrm{Ne}, a_i^\mathrm{opt}\}\) (see Note 2); thus we will search for price of anarchy bounds in these two-action games and note that they hold more generally. The price of anarchy over \(\mathcal {G}^m_{{\mathcal {W}},{\mathcal {F}}}\) while utilizing utility rule f, \(\mathrm {PoA}(\mathcal {G}^m_{{\mathcal {W}},{\mathcal {F}}})\), can be written as
where \(G = (N,{\mathcal {A}},\{U_i\}_{i \in N},W)\) encodes all of the information about a problem instance. This program is not efficient to solve in general; however, we will make use of a parameterization that greatly eases the computation of the price of anarchy. First, we modify (A1) by normalizing \(W(a^\mathrm{Ne})=1\), which can be done by homogeneously scaling each resource value and does not alter the problem's price of anarchy. Next, we relax the equilibrium constraint from holding for every agent \(i \in N\) to holding only as a summation over all agents, i.e., \(\sum _{i \in N} U_i(a^\mathrm{Ne};d) - U_i(a^\mathrm{opt}_i,a^\mathrm{Ne}_{-i};d) \ge 0\). Note that this relaxation causes the new program to provide a lower bound for the original; however, we will show that this bound is tight. Finally, we take the reciprocal of the objective and turn the minimization problem into a maximization problem. The new program, whose solution will be a lower bound for the original, can be written
Now, we make use of a parameterization that is also described in the proof of Theorem 2. Each resource is given a label \((x_r,y_r,z_r,d_r)\) defined by \(x_r = \vert a^\mathrm{Ne}\backslash a^\mathrm{opt}\vert _r\), \(z_r = \vert a^\mathrm{opt}\backslash a^\mathrm{Ne}\vert _r\), \(y_r = \vert a^\mathrm{opt}\cap a^\mathrm{Ne}\vert _r\), and \(d_r\) the number of stubborn agents on the resource. The label denotes the number of agents that utilize a resource in only their Nash action \(x_r\), only their optimal action \(z_r\), or both \(y_r\). The set of all such labels is \({\mathcal {I}}_n = \{(x,y,z)\in {\mathbb {N}}_{\ge 0}^3~\vert ~1\le x+y+z \le n \}\), with \(d\) ranging over the possible numbers of stubborn agents on a resource. For each label we define a parameter \(\theta (x,y,z,d) = \sum _{r \in {\mathcal {R}}(x,y,z,d)}v_r\), where \({\mathcal {R}}(x,y,z,d)\) is the set of resources with label (x, y, z, d). We can express several quantities using this parameterization as follows:
Note that we write the sum over all labels in \({\mathcal {I}}_n\) as \(\sum _{x,y,z,d}\) for brevity. Rewriting (A2) using this parameterization gives
As discussed when introducing (A2), \(1/p^\star \) offers a lower bound on the price of anarchy. We further show that, using the solution to (A3), \(\theta ^\star \), one can construct a game whose price of anarchy provides a matching upper bound, so the bound is tight.
For each label (x, y, z, d) such that \(\theta ^\star (x,y,z,d)>0\), introduce n resources, each with value \(\theta ^\star (x,y,z,d)/n\). As in Fig. 6, define each player's action set to cover \(x+y\) of these resources in their equilibrium action \(a^\mathrm{Ne}_i\) and \(z+y\) of these resources in their optimal action \(a^\mathrm{opt}_i\), where y of the resources are in both actions. By considering the n resources in a ring and offsetting each agent's action sets by one resource, each agent experiences this set of resources symmetrically. Finally, let d stubborn agents be placed on each of these resources. If this is repeated for each label, then one can observe that player i has utility
where the inequality holds from the constraint in (A3) and \(\theta ^\star \) being a feasible solution; thus \(a^\mathrm{Ne}\) is a Nash equilibrium, and the price of anarchy of this game is at most \( \frac{W(a^\mathrm{Ne})}{W(a^\mathrm{opt})} = \frac{\sum _{x,y,z,d} w(x+y)\theta ^\star (x,y,z,d)}{\sum _{x,y,z,d} w(z+y)\theta ^\star (x,y,z,d)} = \frac{1}{\sum _{x,y,z,d} w(z+y)\theta ^\star (x,y,z,d)} = \frac{1}{p^\star }\), where the second equality holds from the constraint in (A3). The constructed game therefore offers an upper bound of \(1/p^\star \) on the price of anarchy, while the solution to (A3) offers a matching lower bound, proving the bound is tight.
Fig. 6 Game construction for resource allocation problems utilizing the solution to (A1). For each tuple (x, y, z, d), n resources are created with value \(\theta ^\star (x,y,z,d)/n\). For a resource with label (x, y, z, d), design the action set of agent i to utilize the first \(x+y\) of these resources in their first action, \(a^\mathrm{Ne}_i\), and resources \(x+1\) to \(x+y+z\) in their other action, \(a^\mathrm{opt}_i\). For the next agent, follow the same process but increase the index of the starting resource by 1. If the agent were to use the non-existent \(n+1\) or greater resource, start the assignment from the beginning, essentially forming a ring. Once each action set is assigned for all n agents, each resource will be used by \(x+y\) agents in the action \(a^\mathrm{Ne}\) and \(y+z\) agents in \(a^\mathrm{opt}\), matching the label it was assigned. We can observe that \(a^\mathrm{Ne}\) is a Nash equilibrium from the constraints in (A1). For a similar but more detailed explanation, see [17]
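To make the construction concrete, the following sketch builds the ring of action sets for a single label (x, y, z, d) and checks the coverage counts described in the caption; the d stubborn agents only add a fixed count on every resource, so they are omitted here. The function and variable names are illustrative and not part of the formal development.

```python
# Minimal sketch of the ring construction in Fig. 6 for a single label (x, y, z, d).
# Resources are indexed 0, ..., n-1 and arranged in a ring; theta plays the role of theta^*(x, y, z, d).
def ring_construction(n, x, y, z, theta):
    value = theta / n  # each of the n resources receives value theta^*(x, y, z, d)/n
    a_ne = [{(i + k) % n for k in range(x + y)} for i in range(n)]       # first x+y resources
    a_opt = [{(i + x + k) % n for k in range(y + z)} for i in range(n)]  # resources x+1, ..., x+y+z
    return value, a_ne, a_opt

# Each resource should be used by x+y agents in a^Ne and y+z agents in a^opt.
n, x, y, z = 6, 2, 1, 1
value, a_ne, a_opt = ring_construction(n, x, y, z, theta=1.0)
for r in range(n):
    assert sum(r in a for a in a_ne) == x + y
    assert sum(r in a for a in a_opt) == y + z
```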
Notice that (A3) is a linear program with decision variable \(\theta \). Next we find the dual of (A3) to be
Because (A3) is a linear program, and thus convex, by the principle of strong duality, \(d^\star = p^\star = \mathrm {PoA}(\mathcal {G}^m_{{\mathcal {W}},{\mathcal {F}}})^{-1}\). Finally, to optimize the price of anarchy over local utility rules, we need only minimize (A4) over \(f:[n+m]\rightarrow {\mathbb {R}}_{\ge 0}\), which can be treated as a vector in \({\mathbb {R}}^{n+m}\). Allowing f to be a decision variable in (A4) would cause each constraint to be bilinear in f(i) and \(\lambda \); however, every occurrence of \(\lambda \) is multiplied by an f(i) for some i and vice versa, and therefore, the two decision variables can be combined into one giving a program of the form (9).
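The appeal to strong duality can be checked numerically on any small linear program; the toy program below is not (A3)/(A4) themselves (whose explicit data depend on w, f, n and m), but it illustrates the primal–dual relationship the proof relies on.

```python
import numpy as np
from scipy.optimize import linprog

# Toy primal: max c^T theta  s.t.  A theta <= b, theta >= 0  (stand-in for a program like (A3));
# its dual:   min b^T y      s.t.  A^T y >= c, y >= 0        (stand-in for a program like (A4)).
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 5.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=(0, None), method="highs")

print(-primal.fun, dual.fun)  # both 10.8: the optimal values coincide (strong duality)
```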
Finally, we note that (i) an optimal utility rule can be composed from the optimal utility rules for each basis function, i.e., if a resource has value \(w_r = \sum _{b=1}^B \alpha _b w_b\) for some \(\{\alpha _b\}_{b=1}^B\), then \({f}^\mathrm{opt}_r = \sum _{b=1}^B \alpha _b {f}^\mathrm{opt}_b\), where \({f}^\mathrm{opt}_b\) is the optimal utility rule for the basis function \(w_b\) described prior, and (ii) the worst-case price of anarchy over the set of games with resource value functions in \({\mathcal {W}} = \{\sum _{b=1}^B \alpha _b w_b \vert \alpha _b \ge 0~\forall b\in [B]\}\) is equal to the worst-case price of anarchy over the classes of games with just one of these basis functions. These two observations have been shown in [5, 18] and follow identically here. This gives the final form of the optimal local utility design and the associated performance guarantee. \(\square \)
Appendix B: Proof of Theorem 2
In this appendix, we give the full proof of Theorem 2 as well as several supporting lemmas. As in the proof of Proposition 1, we restrict our search to games where each player has two actions \({\mathcal {A}}_i=\{a^\mathrm{Ne}_i,a^\mathrm{opt}_i\}\) and note that the price of anarchy over this class is the same as in the original setting with larger agent action sets. The price of anarchy bounds in (13) and (14) are tight along the Pareto-optimal frontier. To prove that each is an upper bound, we will make use of several examples; three structures of parameterized problem instances are shown in Fig. 7a–c. To show that these are lower bounds, we will make use of the smoothness inequalities introduced in [19]. If, given a utility rule f, each Nash equilibrium \(a^\mathrm{Ne}\in \mathrm{NE}(G_f)\) satisfies
for some \(\lambda ,\mu \in {\mathbb {R}}\), then the price of anarchy will satisfy \(\mathrm {PoA}(\mathcal {G}_f) \ge \frac{\lambda }{1-\mu }\). We will provide lower bounds by finding values of \(\lambda \) and \(\mu \) for different settings (e.g., with and without stubborn agents); often, to do so, we will utilize the fact that the welfare of a Nash equilibrium can be lower bounded by
which holds from the definition of a Nash equilibrium (1): \(u_i(a^\mathrm{opt}_i,a^\mathrm{Ne}_{-i}) \le u_i(a^\mathrm{Ne})\) for all \(i \in N\), implying \(\sum _{i \in N}u_i(a^\mathrm{opt}_i,a^\mathrm{Ne}_{-i}) - \sum _{i \in N}u_i(a^\mathrm{Ne}) \le 0\). Additionally, we use the parameterization discussed in Section 3: in an allocation \((a,\overline{a})\), each resource \(r \in {\mathcal {R}}\) is given a label \((x_r,y_r,z_r,d_r)\) defined by \(x_r = \vert a^\mathrm{Ne}\backslash a^\mathrm{opt}\vert _r\), \(z_r = \vert a^\mathrm{opt}\backslash a^\mathrm{Ne}\vert _r\), \(y_r = \vert a^\mathrm{opt}\cap a^\mathrm{Ne}\vert _r\), and \(d_r\) the number of stubborn agents, where, for two joint actions \(a,a^\prime \in {\mathcal {A}}\), \(\vert a\backslash a^\prime \vert _r\) is the number of agents that utilize resource r in action a but not in \(a^\prime \) and \(\vert a\cap a^\prime \vert _r\) is the number of agents that utilize resource r in both a and \(a^\prime \). This parameterization allows us to write \(W(a^\mathrm{Ne}) = \sum _{r \in {\mathcal {R}}} v_r \mathbbm {1}_{[x_r+y_r]}\) and \(W(a^\mathrm{opt}) = \sum _{r \in {\mathcal {R}}} v_r \mathbbm {1}_{[y_r+z_r]}\); additionally, (B6) can be rewritten as
where, in covering games, the welfare function \(w(x) = \mathbbm {1}_{[x]}\) is the indicator that its argument is greater than zero. Manipulating the right-hand side of (B7) into the form of (B5) will be the primary method of lower bounding the price of anarchy of a utility rule f over a class of games.
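For reference, the label of a resource can be computed directly from the two allocations; the sketch below assumes each action \(a_i\) is represented as a set of resources and that stubborn agents are given by a per-resource count (illustrative names only).

```python
# Compute the label (x_r, y_r, z_r, d_r) of resource r from the allocations a^Ne and a^opt.
def resource_label(r, a_ne, a_opt, stubborn_count):
    ne = {i for i, a in enumerate(a_ne) if r in a}    # agents using r in their Nash action
    opt = {i for i, a in enumerate(a_opt) if r in a}  # agents using r in their optimal action
    x = len(ne - opt)   # |a^Ne \ a^opt|_r : only in the Nash action
    z = len(opt - ne)   # |a^opt \ a^Ne|_r : only in the optimal action
    y = len(ne & opt)   # |a^opt n a^Ne|_r : in both actions
    d = stubborn_count.get(r, 0)
    return x, y, z, d

# With these labels, the covering welfares are
# W(a^Ne)  = sum_r v_r * 1[x_r + y_r > 0]   and   W(a^opt) = sum_r v_r * 1[y_r + z_r > 0].
```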
From [6], the optimal utility rule in covering games with no stubborn agents and arbitrarily many regular agents \(\mathcal {G}^0\) is
and \(f^0(0)=0\). This can also be seen by taking n to infinity in (12). The performance guarantee of \(f^0\) is \(\mathrm {PoA}(\mathcal {G}^0_{f^0}) = 1-\frac{1}{e}\), which can be seen from the following lemma.
Lemma 1
(Gairing [6]) In the class of problems \(\mathcal {G}^0\), with utility rule
Equation (B5) is satisfied with \(\lambda = 1\) and \(\mu = -1/(e-1)\).
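Numerically, Lemma 1's coefficients recover the \(1-1/e\) guarantee. The sketch below uses the closed form for \(f^0\) that is consistent with the definition of \(\Gamma _m\) in the proof of Theorem 2 (an assumption here, since (B8) is not reproduced above): \(f^0(j) = (j-1)!\,\big(e - \sum _{i=0}^{j-1} 1/i!\big)/(e-1)\) with \(f^0(0)=0\).

```python
import math

def f0(j):
    # Assumed closed form: f^0(j) = (j-1)! * (e - sum_{i=0}^{j-1} 1/i!) / (e - 1), with f^0(0) = 0.
    if j == 0:
        return 0.0
    tail = math.e - sum(1.0 / math.factorial(i) for i in range(j))
    return math.factorial(j - 1) * tail / (math.e - 1)

# Lemma 1: lambda = 1, mu = -1/(e-1) gives PoA >= lambda/(1 - mu) = 1 - 1/e.
lam, mu = 1.0, -1.0 / (math.e - 1)
print(lam / (1.0 - mu), 1.0 - 1.0 / math.e)    # both ~0.6321
print([round(f0(j), 3) for j in range(1, 6)])  # 1.0, 0.418, 0.254, 0.180, 0.139 (decreasing)
```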
Fig. 7 a Example A: \(G^A\) A problem instance with one agent having two choices: a resource with value one and a resource with value \(\gamma \) covered by \(\eta \) defective agents. When \(\gamma \le 1/f(\eta + 1)\), the agent may pick the resource of value one in equilibrium, leading to \(\mathrm {PoA}(G^A_f) = \frac{1}{\gamma }\ge f(\eta + 1)\). b Example B: \(G^B\) A problem with \(\xi + 1\) agents, each with two choices: selecting \(\xi \) resources of value 1, \(a^\mathrm{Ne}_i\), or one resource of value \(\gamma \) with \(\eta \) defective agents, \(a_i^\mathrm{opt}\). The agents' equilibrium and optimal actions are distinct from one another, implying that in the allocation \(a^\mathrm{Ne}\), \(\xi \) agents cover each resource of value 1, and in \(a^\mathrm{opt}\), each resource of value \(\gamma \) is covered by one agent. When \(\gamma \le \frac{\xi f(\xi )}{f(\eta + 1)}\), \(a^\mathrm{Ne}\) is an equilibrium allocation with \(\mathrm {PoA}(G^B_f) = \frac{1}{\gamma }\). c Example C: \(G^C\) A problem with \(\xi + 1\) agents, each with two choices: selecting \(\xi \) resources of value 1, \(a^\mathrm{Ne}_i\), or the remaining resource of value 1 and one resource of value \(\gamma \) with \(\eta \) defective agents, \(a_i^\mathrm{opt}\). The agents' equilibrium and optimal actions are distinct from one another, implying that in the allocation \(a^\mathrm{Ne}\), \(\xi \) agents cover each resource of value 1, and in \(a^\mathrm{opt}\), every resource is covered by one agent. When \(\gamma \le \frac{\xi f(\xi )-f(\xi +1)}{f(\eta + 1)}\), \(a^\mathrm{Ne}\) is an equilibrium allocation with \(\mathrm {PoA}(G^C_f) = \frac{1}{1+\gamma }\)
This utility rule is useful in constructing the optimal utility rules in the setting with stubborn agents. Additionally, the following claim is useful in proving several lower bounds.
Lemma 2
The local utility rule \(f^0\) defined in (B8) satisfies
Proof
The claim can be proven directly by substitution:
\(\square \)
The following several lemmas will define and quantify the smoothness coefficients of some useful local utility rules.
Lemma 3
In the class of problems \(\mathcal {G}^m\), with utility rule
Equation (B5) is satisfied with \(\lambda = f^0(m+1)\) and \(\mu = 1-mf^0(m)\).
Proof
Let \({\mathcal {R}}_a \subset {\mathcal {R}}\) be the set of all resources where \(x_r+y_r+d_r \ge m+1\) and let \({\mathcal {R}}_b \subset {\mathcal {R}}\) be the set of all resources where \(x_r+y_r+d_r \le m\), forming a partition of \({\mathcal {R}}\).
For the resources in \({\mathcal {R}}_a\),
where (B12a) and (B12d) hold because \(f^0\) is decreasing, (B12b) holds because \(f^0\) is positive, and (B12c) and (B12e) hold from Lemma 2.
For the resources in \({\mathcal {R}}_b\),
where (B13b) holds from Lemma 2 and (B13c) holds from \(x_r/(x_r+y_r+d_r)\le 1\), providing the same lower bound for the price of anarchy.
It follows that \(\lambda = f^0(m+1)\) and \(\mu = 1-mf^0(m)\) satisfy (B5). \(\square \)
Lemma 4
In the class of problems \(\mathcal {G}^0\), with utility rule
Equation (B5) is satisfied with \(\lambda = mf^0(m)\) and \(\mu = \frac{e-2}{e-1}-mf^0(m)\).
Proof
Let \({\mathcal {R}}_c \subset {\mathcal {R}}\) denote the set of resources where \(x_r >0 \) or \(y_r>0\), and let \({\mathcal {R}}_d \subset {\mathcal {R}}\) be the set of resources where \(x_r=y_r=0\). First recall the bound from (B12e) and (B13d) that together give
in the special case where \(d_r = 0\), as is the case for games in the class \(\mathcal {G}^0\). For the set \({\mathcal {R}}_c\),
where (B17a) holds from Lemma 2, (B17b) holds from \(\mathbbm {1}_{[x]} \le x\) for all non-negative integers x, and (B17c) holds from the definition of \({\mathcal {R}}_c\), which gives \(\mathbbm {1}_{[x_r+y_r]} = 1\). For the remaining resources in \({\mathcal {R}}_d\),
where (B18a) holds from the definition of \({\overline{f}}^m\) and \(mf^0(m) < f^0(1)=1\), and (B18b) holds from \(f^0\) positive and \(x_r=y_r=0\). From (B17c) and (B18b), \(\lambda = mf^0(m)\) and \(\mu = \frac{e-2}{e-1}-mf^0(m)\) satisfy (B5). \(\square \)
Lemma 5
In the class of problems \(\mathcal {G}^m\), with utility rule
Equation (B5) is satisfied with \(\lambda = f^0(m+1)\) and \(\mu = 0\).
Proof
As in Lemma 3, let \({\mathcal {R}}_a \subset {\mathcal {R}}\) be the set of all resources where \(x_r+y_r+d_r \ge m+1\) and let \({\mathcal {R}}_b \subset {\mathcal {R}}\) be the set of all resources where \(x_r+y_r+d_r \le m\), forming a partition of \({\mathcal {R}}\). For the resources in the set \({\mathcal {R}}_a\), follow the steps of (B12a)–(B12e) and note that \(jf^0(j) \le 1\) for all j, therefore (B12e) is further lower-bounded by
For the resources in \({\mathcal {R}}_b\),
where (B21a) holds because \(f^0\) is decreasing, (B21b) holds from \(jf^0(j) \le 1\) for all \(j \ge 0\), and (B21c) holds from \(\mathbbm {1}_{[x]} \le x\) for all non-negative integers x. From (B20) and (B21c), \(\lambda = f^0(m+1)\) and \(\mu = 0\) satisfy (B5). \(\square \)
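For intuition, the smoothness coefficients stated in Lemmas 3–5 can be turned into numerical guarantees via \(\mathrm {PoA} \ge \lambda /(1-\mu )\); the sketch below evaluates these for small m, again using the assumed closed form for \(f^0\) from the sketch following Lemma 1.

```python
import math

def f0(j):
    # Assumed closed form consistent with the definition of Gamma_m in the proof of Theorem 2.
    if j == 0:
        return 0.0
    tail = math.e - sum(1.0 / math.factorial(i) for i in range(j))
    return math.factorial(j - 1) * tail / (math.e - 1)

for m in range(1, 5):
    lem3 = f0(m + 1) / (m * f0(m))                                    # lambda = f^0(m+1), mu = 1 - m f^0(m)
    lem4 = m * f0(m) / (1 - (math.e - 2) / (math.e - 1) + m * f0(m))  # lambda = m f^0(m), mu = (e-2)/(e-1) - m f^0(m)
    lem5 = f0(m + 1)                                                  # lambda = f^0(m+1), mu = 0
    print(m, round(lem3, 3), round(lem4, 3), round(lem5, 3))
```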
Proof of Theorem 2
To prove that the curve defined by (13) and (14) represents a Pareto-optimal frontier of the multi-criterion problem of minimizing \(\mathrm {PoA}(\mathcal {G}^m_f)\) and \(\mathrm {PoA}(\mathcal {G}^0_f)\), we first give a parameterized utility rule that traces the curve, then show tight lower and upper bounds on its price of anarchy, and finally show that this utility rule is indeed Pareto-optimal. Let \(f^t(j) = t{\overline{f}}^m(j) + (1-t)f^0(j)\) be a local utility rule parameterized by \(t \in [0,1]\). Through some rearranging, this is equivalent to (15). We will show that the price of anarchy guarantees of this utility rule trace the Pareto-optimal frontier.
Part 1 Upper Bound We will give problem instances that upper bound the price of anarchy over the sets \(\mathcal {G}^m\) and \(\mathcal {G}^0\) for the utility rule \(f^t\). For the nominal price of anarchy, let \(G^C \in \mathcal {G}^0\) be a covering game as described in Fig. 7c with \(\eta = 0\) and \(\gamma = \frac{\xi f(\xi ) - f(\xi +1)}{f(1)}\). By selecting \(\xi \ge m+1\) agents in the game (where m is the number of defective agents for which \(f^t\) is designed), from Lemma 2
Defining \(\Gamma _m = mf^0(m)-1 = m!\frac{e-\sum _{i=0}^{m-1} \frac{1}{i!}}{e-1} -1\), the price of anarchy of the described game is
Because \(G^C \in \mathcal {G}^0\), \(\mathrm {PoA}(\mathcal {G}^0_{f^t}) \le \mathrm {PoA}(G^C_{f^t})\). For the price of anarchy in the perturbed agent setting, let \(G^A \in \mathcal {G}^m\) be a covering game as described in Fig. 7a with \(\eta = m\) and \(\gamma = f^t(1)/f^t(m+1)\). From the definition of \(f^t\) and Lemma 2, the price of anarchy of this game with utility rule \(f^t\) is
Because \(G^A \in \mathcal {G}^m\), \(\mathrm {PoA}(\mathcal {G}^m_{f^t}) \le \mathrm {PoA}(G^A_{f^t})\). This provides our upper bounds for the price of anarchy over \(\mathcal {G}^0\) and \(\mathcal {G}^m\) while using the utility rule \(f^t\).
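For reference, the formula defining \(\Gamma _m\) above gives, for example, \(\Gamma _1 = \frac{1!(e-1)}{e-1} - 1 = 0\) and \(\Gamma _2 = \frac{2!(e-2)}{e-1} - 1 \approx -0.16\).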
Part 2 Lower Bound To lower bound the price of anarchy, we again look for coefficients \(\lambda , \mu \) that satisfy (B5). From the definition of \(f^t\), (B7) can be rewritten
where \({\overline{f}}^m\) is as defined in (B11). For any game in \(\mathcal {G}^m\), from Lemmas 3 and 5, (B22) can be lower bounded by
producing a lower bound on the price of anarchy of
For the price of anarchy over the nominal setting \(\mathcal {G}^0\) with utility rule \(f^t\), (14) needs to be lower bounded for the case where \(d_r = 0\) for all \(r \in {\mathcal {R}}\). From Lemmas 4 and 1, this lower bound is
This gives a lower bound on the nominal price of anarchy while using \(f^t\) of
Part 3 Pareto-Optimality Consider a local utility rule f with nominal price of anarchy guarantee
for some \(x \in [0,1]\). Consider a game \(G^C \in \mathcal {G}^0\) following Fig. 7c where \(\eta = 0\) and \(\xi = m+1\). If \(\gamma = ((m+1)f(m+1) - f(m+2))/f(1)\), then
from the assumption that \(f(j) = f^0(j) \ \forall j\ge m+1\) and Lemma 2. To satisfy the price of anarchy guarantee in (B24),
Now, consider the game \(G^A\in \mathcal {G}^m\) described in Fig. 7a where \(\eta = m\) and \(\gamma = f(1)/f(m+1) = f(1)/f^0(m+1)\). The price of anarchy of this game is \(\mathrm {PoA}(G^A_f) = 1/\gamma \). From (B25),
In (B26), choose \(x = \frac{(e-1)(1+t\Gamma _m)}{1+(e-1)(1+t\Gamma _m)}\) for some \(t \in [0,1]\) and
from the fact \(\Gamma _m = f^0(m+1) + \frac{1}{e-1}\). The monotonicity of each price of anarchy expression shows the logic is reversible, matching the theorem. A similar argument could be followed for other values of the utility rule. \(\square \)
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Ferguson, B.L., Marden, J.R. Robust Utility Design in Distributed Resource Allocation Problems with Defective Agents. Dyn Games Appl 13, 208–230 (2023). https://doi.org/10.1007/s13235-022-00470-y