WINE 2006
2nd International Workshop on
Internet & Network Economics

December 15-17, 2006, Patra, Greece


LIST OF ACCEPTED PAPERS (ordered by submission numbers, with abstracts)


Davide Bilò, Luciano Gualà and Guido Proietti. Dynamic Mechanism Design
Abstract: In this paper we address the question of designing truthful mechanisms for solving optimization problems on dynamic graphs with selfish edges. More precisely, we are given a graph $G$ of $n$ nodes, and we assume that each edge of $G$ is owned by a selfish agent. The strategy of an agent consists of revealing to the system the cost for using its edge, but this cost is not constant and can change over time. Additionally, edges can enter into and exit from $G$. Among the various possible assumptions which can be made to model how these edge-cost modifications take place, we focus on two settings: (i) the \emph{dynamic}, in which modifications are unpredictable and time-independent, and for a given optimization problem on $G$, the mechanism has to efficiently maintain the output specification and the payment scheme for the agents; (ii) the \emph{time-sequenced}, in which modifications happen at fixed time steps, and the mechanism has to minimize an objective function which takes into consideration both the quality and the set-up cost of a new solution. In both settings, we investigate the existence of exact and approximate truthful mechanisms. In particular, for the dynamic setting, we analyze the \emph{minimum spanning tree} problem, and we show that if edge costs can only decrease, then there exists an efficient dynamic truthful mechanism for handling a sequence of $k$ edge-cost reductions having runtime ${\cal O}(h \log n + k \log^4 n)$, where $h$ is the overall number of payment changes.
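The decreasing-cost case rests on the classical cycle rule for minimum spanning trees: when the cost of a non-tree edge drops below the heaviest edge on the tree path between its endpoints, swapping the two restores optimality. A minimal plain-Python sketch of that rule (the paper achieves polylogarithmic updates with dynamic data structures; the function name and data layout here are assumptions of this sketch, not the paper's):

```python
def update_mst(tree, cost, u, v, new_cost):
    """Apply the MST cycle rule after the cost of edge (u, v) decreases.
    tree: set of frozenset({a, b}) tree edges; cost: dict edge -> weight."""
    e_new = frozenset((u, v))
    cost[e_new] = new_cost
    if e_new in tree:                      # a tree edge getting cheaper stays
        return tree
    adj = {}                               # adjacency lists of the tree
    for e in tree:
        a, b = tuple(e)
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent = {u: None}                     # DFS to find the u..v tree path
    stack = [u]
    while stack:
        x = stack.pop()
        for y in adj.get(x, []):
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path, x = [], v
    while parent[x] is not None:
        path.append(frozenset((x, parent[x])))
        x = parent[x]
    heaviest = max(path, key=lambda e: cost[e])
    if cost[heaviest] > new_cost:          # the swap strictly improves the tree
        return (tree - {heaviest}) | {e_new}
    return tree
```

The plain rebuild of the path costs O(n) per update; the paper's bound comes from doing the same comparison with sub-linear dynamic-tree structures.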
Moshe Babaioff, Michal Feldman and Noam Nisan. Mixed Strategies in Combinatorial Agency
Abstract: We study a setting where a principal needs to motivate a team of agents whose combination of hidden efforts stochastically determines an outcome. In a companion paper we devise and study a basic ``combinatorial agency'' model for this setting, where the principal is restricted to inducing a pure Nash equilibrium. Here, we show that the principal may possibly gain from inducing a mixed equilibrium, but this gain can be bounded for various families of technologies (in particular if a technology has symmetric combinatorial structure). In addition, we present a sufficient condition under which mixed strategies yield no gain to the principal.
Martin Hoefer and Jean Cardinal. Selfish Service Installation in Networks
Abstract: We consider a scenario of distributed service installation in privately owned networks. Our model is a non-cooperative vertex cover game for k players. Each player owns a set of edges in a graph G and strives to cover each edge by an incident vertex. Vertices have costs and must be purchased to be available for the cover. Vertex costs can be shared arbitrarily by players. Once a vertex is bought, it can be used by any player to fulfill the covering requirement of her incident edges. Despite its simplicity, the model exhibits a surprisingly rich set of properties. We present a cumulative set of results including tight characterizations for prices of anarchy and stability, NP-hardness of equilibrium existence, and polynomial-time solvability for important subclasses of the game. In addition, we consider the task of finding approximate Nash equilibria that purchase an approximation to the optimum social cost, in which each player can improve her contribution through selfish defection only by at most a certain factor. A variation of the primal-dual algorithm for minimum weighted vertex cover yields a guarantee of 2, which is shown to be tight.
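The baseline referred to in the last sentence, the classical primal-dual (local-ratio) 2-approximation for minimum weighted vertex cover, is short enough to sketch. This is the standard covering algorithm, not the game-theoretic variation the paper builds on it:

```python
def vertex_cover_2approx(weights, edges):
    """Primal-dual 2-approximation for minimum weighted vertex cover.
    weights: dict vertex -> nonnegative cost; edges: iterable of (u, v).
    Raise each edge's dual until one endpoint's residual cost hits zero;
    the tight vertices form a cover of cost at most twice the optimum."""
    residual = dict(weights)
    for u, v in edges:
        delta = min(residual[u], residual[v])   # how far this dual can rise
        residual[u] -= delta
        residual[v] -= delta
    return {x for x, r in residual.items() if r == 0}
```

Every edge is covered because processing it drives at least one endpoint's residual to zero, and the total cost of tight vertices is at most twice the sum of the duals, which lower-bounds the optimum.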
Heiner Ackermann, Heiko Roeglin and Berthold Voecking. Pure Nash Equilibria in Player-Specific and Weighted Congestion Games
Abstract: Unlike standard congestion games, weighted congestion games and congestion games with player-specific delay functions do not admit pure Nash  equilibria in general. It is known, however, that there exist pure equilibria  for both of these variants in the case of singleton games, i.e., if the  players' strategy spaces contain only sets of cardinality one. In this paper,  we investigate how far such a property on the players' strategy spaces  guaranteeing the existence of pure equilibria can be extended. We show that  both weighted and player-specific congestion games admit pure equilibria in the  case of matroid games, i.e., if the strategy space of each player  consists of the bases of a matroid on the set of resources. We can also  show that the matroid property is the maximal condition on the players'  strategy spaces that guarantees pure equilibria without taking into account how  the strategy spaces of different players are interweaved. In the case of  player-specific congestion games, our analysis of matroid games also yields a  polynomial time algorithm for computing pure equilibria.
Dominic Dumrauf and Martin Gairing. Price of Anarchy for Polynomial Wardrop Games
Abstract: In this work, we consider Wardrop games where traffic has to be routed through a shared network. Traffic is allowed to be split into arbitrary pieces and can be modeled as network flow. For each edge in the network there is a latency function that specifies the time needed to traverse the edge given its congestion. In a Wardrop equilibrium all used paths between a given source-destination pair have equal and minimal latency.  In this paper, we allow for polynomial latency functions with an upper bound $d$ and a lower bound $s$ on the degree of all monomials that appear in the polynomials. For this environment, we prove upper and lower bounds on the price of anarchy.
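For intuition on such bounds, the classical two-link Pigou network already exhibits the gap for a pure monomial of degree $d$: one link with latency $\ell_1(x) = x^d$, one with constant latency $1$, and unit demand. A small illustrative computation (this is the textbook example, not the paper's bound for general polynomials with monomial degrees between $s$ and $d$):

```python
def pigou_poa(d, grid=100_000):
    """Price of anarchy of the two-link Pigou network with latencies
    x**d and 1 under unit demand. At the Wardrop equilibrium all flow
    uses the first link (its latency never exceeds 1), so the
    equilibrium cost is 1; the optimum splits the flow, found here
    by a simple grid search over the split x."""
    opt = min(x / grid * (x / grid) ** d + (1 - x / grid)
              for x in range(grid + 1))
    return 1.0 / opt

# With linear latencies (d = 1) this recovers the familiar 4/3 bound,
# and the ratio grows with the degree d.
```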
David Abraham, Ning Chen, Vijay Kumar and Vahab Mirrokni. Assignment Problems in Rental Markets
Abstract: Motivated by the dynamics of the ever-popular online movie rental business, we study a range of assignment problems in rental markets. The assignment problems associated with rental markets possess a rich mathematical structure and are closely related to many well-studied one-sided matching problems. We formalize and characterize the assignment problems in rental markets in terms of one-sided matching problems, and consider several solution concepts for these problems. In order to evaluate and compare these solution concepts (and the corresponding algorithms), we define some "value" functions to capture our objectives, which include fairness, efficiency and social welfare. Then, we bound the value of the output of these algorithms in terms of the chosen value functions. We also consider models of rental markets corresponding to static, online, and dynamic customer valuations. We provide several constant-factor approximation algorithms for the assignment problem, as well as hardness of approximation results for the different models. Finally, we describe some experiments with a discrete event simulator to compare the various algorithms in a practical setting, and present some interesting experimental results.
Nicole Immorlica, Kamal Jain and Mohammad Mahdian. Game-Theoretic Aspects of Designing Hyperlink Structures
Abstract: We study the problem of designing the hyperlink structure between the web pages of a web site in order to maximize the revenue generated from the traffic on the web site. We show this problem is equivalent to the well-studied setting of Markov Decision Processes (MDPs). Thus existing results from that literature imply the existence of polynomial-time algorithms for finding the optimal hyperlink structure, as well as a linear program (LP) to describe the optimal structure. We design another LP which also describes the optimal structure and use it to address our problem (and, by extension, all MDPs) from the perspective of cooperative game theory: if each web page is controlled by an autonomous agent, is it possible to give the individuals and coalitions incentive to cooperate and build the optimal hyperlink design? We study this question in the settings of transferable utility (TU) and non-transferable utility (NTU) games. In the TU setting, we use linear programming duality to show that the core of the game is non-empty. For the NTU setting, we show that if we allow “mixed” strategies, the core of the game is non-empty, but there are examples showing that the core can be highly inefficient.
Pradeep Dubey, Rahul Garg and Bernard De Meyer. Competing for Customers in a Social Network: The Quasi-Linear Case
Abstract: There are many situations in which a customer's proclivity to buy the product of any firm depends not only on the classical attributes of the product such as its price and quality, but also on who else is buying the same product. We model these situations as games in which firms compete for customers located in a ``social network''. A Nash equilibrium (NE) in pure strategies exists and is unique. Indeed there are closed-form formulae for the NE in terms of the exogenous parameters of the model, which enable us to compute the NE in polynomial time. An important structural feature of the NE is that, if there are no a priori biases between customers and firms, then there is a cut-off level above which high-cost firms are blockaded at an NE, while the rest compete uniformly throughout the network. We finally explore the relation between the connectivity of a customer and the money firms spend on him. This relation becomes particularly transparent when externalities are dominant: the NE can be characterized in terms of the invariant measures on the recurrent classes of the Markov chain underlying the social network.
Spyros Kontogiannis, Panagiota Panagopoulou and Paul Spirakis. Polynomial Algorithms for Approximating Nash Equilibria of Bimatrix Games
Abstract: We focus on the problem of computing an $\epsilon$-Nash equilibrium of a bimatrix game, when $\epsilon$ is an absolute constant.  We present a simple algorithm for computing a $\frac{3}{4}$-Nash equilibrium for any bimatrix game in strongly polynomial time and we next show how to extend this algorithm so as to obtain a (potentially stronger) parameterized approximation. Namely, we present an algorithm that computes a $\frac{2+\lambda+\epsilon}{4}$-Nash equilibrium for any $\epsilon$, where $\lambda$ is the minimum, among all Nash equilibria, expected payoff of either player.  The suggested algorithm runs in time polynomial in $\frac{1}{\epsilon}$ and the number of strategies available to the players.
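As a reference point for what the guarantee means: a profile $(x, y)$ is an $\epsilon$-Nash equilibrium of the bimatrix game $(A, B)$ if neither player can gain more than $\epsilon$ by deviating to any pure strategy. A small checker for this condition (illustrative only; the algorithms in the paper construct the profiles themselves):

```python
def nash_eps(A, B, x, y):
    """Largest incentive to deviate for either player: (x, y) is an
    eps-Nash equilibrium of the bimatrix game (A, B) iff the returned
    value is at most eps. Payoff matrices are lists of lists; mixed
    strategies x, y are probability vectors over rows and columns."""
    n, m = len(A), len(A[0])
    pay_row = sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(m))
    pay_col = sum(x[i] * B[i][j] * y[j] for i in range(n) for j in range(m))
    best_row = max(sum(A[i][j] * y[j] for j in range(m)) for i in range(n))
    best_col = max(sum(x[i] * B[i][j] for i in range(n)) for j in range(m))
    return max(best_row - pay_row, best_col - pay_col)
```

For matching pennies, the uniform profile gives regret 0, while a pure profile gives regret 2 (with payoffs in [-1, 1]).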
Ola Rozenfeld and Moshe Tennenholtz. Strong and Correlated Strong Equilibria in Monotone Congestion Games
Abstract: The study of congestion games is central to the interplay between computer science and game theory. However, most work in this context does not deal with possible deviations by coalitions of players, a significant issue one may wish to consider. In order to deal with this issue we study the existence of strong and correlated strong equilibria in monotone congestion games. Our study of strong equilibrium deals with monotone-increasing congestion games, complementing the results obtained by Holzman and Law-Yone on monotone-decreasing congestion games. We then present a study of correlated-strong equilibrium for both decreasing and increasing monotone congestion games.
Pradeep Dubey and Rahul Garg. Games of Connectivity
Abstract: We consider a communications network in which users transmit beneficial information to each other at a cost.  We pinpoint conditions under which the induced cooperative game is convex.  Our analysis is in a lattice-theoretic framework, which is at once simple and able to encompass a wide variety of seemingly disparate models.
Xi Chen, Xiaotie Deng and Shang-Hua Teng. Sparse Games Are Hard
Abstract: In this paper, we advance our understanding of the computation of Nash equilibria in two-player games. We consider sparse games in which each player has n pure strategies and the payoff matrices have at most O(n) non-zero entries. We prove that sparse games do not have a fully polynomial-time approximation scheme unless PPAD is in P. On the positive side, we give a polynomial-time algorithm for finding exact Nash equilibria for a class of sparse win-lose games.
Ping Li, Housheng Chen and Guangdong Huang. On Portfolio's Default-Risk-Adjusted Duration and Value: Model and Algorithm Based on Copulas
Abstract: In this paper, we propose a new approach, based on copulas, to calculating the default-risk-adjusted duration and present value for a portfolio of bonds with default risk. A copula function is used to determine the default dependence structure and to simulate correlated default times from individual obligors' default distributions. The approach is shown to be simple and applicable through a numerical example, in which we demonstrate how to calculate the default-risk-adjusted duration and present value for a given portfolio. In the process we take into account the settlement time when default happens, the choice of copula function and the correlation between obligors, and we conduct a sensitivity analysis of the effects of Kendall's tau and the choice of copula function on the default-risk-adjusted duration and present value. Results show that the duration and present value simulated from the Gaussian copula fluctuate more than those from the Clayton and Gumbel copulas when Kendall's tau varies from zero to one.
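The core simulation step described here, drawing dependent default times through a copula, is compact. A two-obligor sketch with a Gaussian copula and exponential default-time marginals (the hazard rates and the exponential marginals are illustrative assumptions of this sketch, not the paper's calibration):

```python
import math
import random

def gaussian_copula_default_times(rho, lambdas, rng=random):
    """Draw one pair of default times coupled by a Gaussian copula:
    correlate two standard normals with coefficient rho, push them
    through the normal CDF to obtain dependent uniforms, then invert
    each obligor's exponential default-time distribution Exp(lam)."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    times = []
    for z, lam in zip((z1, z2), lambdas):
        u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z)
        u = min(u, 1.0 - 1e-12)                          # guard log(0)
        times.append(-math.log(1.0 - u) / lam)           # inverse Exp CDF
    return times
```

Averaging discounted cash flows over many such draws yields the Monte-Carlo estimates of duration and present value that the abstract refers to.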
Rahul Garg and Sanjiv Kapoor. Price Roll-Backs and Path Auctions: An Approximation Scheme for Computing the Market Equilibrium
Abstract: In this paper we investigate the structure of prices in approximate solutions to the market equilibrium problem. The bounds achieved allow a scaling approach for computing market equilibria in the Fisher model. Our algorithm computes an exact solution and improves the complexity of previously known combinatorial algorithms for the problem. It consists of a price roll-back step combined with auction steps. Our approach also leads to an efficient polynomial-time approximation scheme. We also show a reduction from a flow problem to the market equilibrium problem, illustrating its inherent complexity.
Tian-Ming Bu, Qi Qi and Aries Wei Sun. Unconditional Competitive Auctions with Copy and Budget Constraints
Abstract: This paper investigates a new auction model in which bidders have both copy and budget constraints. The new model has extensive and interesting applications in auctions of online ad-words, software licenses, etc. We consider the following problem: supposing all participants are rational, how should the objects be allocated, and at what prices, so as to guarantee the auctioneer a high revenue, and how high is that revenue? We introduce a new kind of mechanism called \emph{win-win mechanisms} and present the notion of \emph{unconditional competitive auctions}. A notably interesting property of \emph{win-win mechanisms} is that the self-interested strategies bring better utility not only to the buyers but also to the auctioneer. Then we present \emph{win-win mechanisms} for multi-unit auctions with copy and budget constraints. We prove that these auctions are \emph{unconditionally competitive} in the situations of both limited and unlimited supply.
Deeparnab Chakrabarty, Nikhil Devanur and Vijay Vazirani. New Results on Rationality and Strongly Polynomial Time Solvability in Eisenberg-Gale Markets
Abstract: We study the structure of EG[2], the class of Eisenberg-Gale markets with two agents. We prove that all markets in this class are rational and they admit strongly polynomial algorithms whenever the polytope containing the set of feasible utilities of the two agents can be described via a combinatorial LP. This helps resolve positively the status of two markets left as open problems by [JV]: the capacity allocation market in a directed graph with two source-sink pairs and the network coding market in a directed network with two sources. Our algorithms for solving the corresponding nonlinear convex programs are fundamentally different from those obtained by [JV]; whereas they use the primal-dual schema, we use a carefully constructed binary search.
Dorit Hochbaum. Ranking sports teams and the inverse equal paths problem
Abstract: The problem of rank aggregation has been studied in contexts varying from sports, to multi-criteria decision making, to machine learning, to academic citations, to ranking web pages, and to behavioral issues. The dynamics of the group aggregation of individual decisions has been a subject of central importance in both normative and descriptive decision theory. We present here a new paradigm using an optimization framework that addresses major shortcomings in current models of aggregate ranking. Different ranking methods provide different outcomes that are often criticized for being subjective and ignoring some factors or emphasizing others. The difference in the proposed ranking scheme is that subjective considerations are made explicit and can be easily incorporated into the scheme. The focus of the presentation is on the application of ranking sports competitors. The problem of aggregate ranking ``optimally'' is shown to be linked to a problem we call {\em the inverse equal paths problem}. This framework enables the introduction of a specific performance measure for the quality of the aggregate ranking as measured by its deviations from the individual ranking observations. The contributions include: a novel ranking scheme; a performance measure for the quality of an aggregate ranking; a link to the inverse equal paths problem; a polynomial-time combinatorial algorithm for convex penalty functions of the deviations; NP-hardness of some forms of nonlinear penalty functions; and an algorithmic setup of the problem as a network flow problem.
Mikołaj Morzy and Adam Wierzbicki. The Sound of Silence: Mining Implicit Feedbacks to Compute Reputation
Abstract: A reliable mechanism for scoring the reputation of sellers is crucial for the development of a successful environment for customer-to-customer e-commerce. Unfortunately, most C2C systems utilize simple feedback-based reputation systems that not only fail to offer sufficient protection from fraud, but also tend to overestimate the reputation of sellers by introducing a strong bias toward maximizing the volume of sales at the expense of the quality of service. In this paper we present a method that avoids the unfavorable phenomenon of overestimating the reputation of sellers by using implicit feedbacks. We introduce the notion of an implicit feedback and we propose two strategies for discovering implicit feedbacks. We perform a twofold evaluation of our proposal. To demonstrate the existence of the implicit feedback and to propose an advanced method of implicit feedback discovery, we conduct experiments on a large volume of real-world data acquired from an online auction site. Next, a game-theoretic approach is presented that uses simulation to show that the use of the implicit feedback can improve a simple reputation system such as the one used by eBay. Both the results of the simulation and the results of the experiments demonstrate the validity and importance of using implicit feedbacks in reputation scoring.
Pinyan Lu, Shang-Hua Teng and Changyuan Yu. Truthful Auctions with Optimal Profit
Abstract: We study the design of truthful auction mechanisms for maximizing the seller's profit. We focus on the case when the auction mechanism does not have any knowledge of bidders' valuations, especially of their upper bound. For the single-item auction, we obtain an ``asymptotically'' optimal scheme: for any $k\in Z^+, \epsilon >0$, we give a randomized truthful auction that guarantees an expected profit of $\Omega(\frac{OPT}{\ln OPT \ln\ln OPT \cdots (\ln^{(k)}OPT)^{1+\epsilon}})$, where $OPT$ is the maximum social utility of the auction. Moreover, we show that no truthful auction can guarantee an expected profit of $\Omega(\frac{OPT}{\ln OPT \ln\ln OPT\cdots \ln^{(k)}OPT})$. In addition, we extend our results and techniques to multi-unit auctions, AdWords auctions, and combinatorial auctions.
Abraham Flaxman, David Gamarnik and Gregory Sorkin. First-passage percolation on a width-2 strip and the path cost in a VCG auction
Abstract: We study the time constant for first-passage percolation, and the Vickrey-Clarke-Groves (VCG) payment for the shortest path, on a width-2 strip with random edge costs. These statistics attempt to describe two seemingly unrelated phenomena, arising in physics and economics respectively: the first-passage percolation time predicts how long it takes for a fluid to spread through a random medium, while the VCG payment for the shortest path is the cost of maximizing social welfare among selfish agents. However, our analyses of the two are quite similar, and require solving (slightly different) recursive distributional equations. Using Harris chains, we can characterize distributions, not just expectations.
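The quantity being analyzed can also be estimated empirically. A Monte-Carlo sketch of the first-passage time across a width-2 strip with i.i.d. Uniform[0,1] edge costs (illustrative simulation under assumed uniform costs; the paper derives the time constant exactly from recursive distributional equations):

```python
import random

def strip_time_constant(n, rng=random):
    """Estimate the first-passage time constant on a length-n, width-2
    strip with i.i.d. Uniform[0, 1] edge costs. Track the shortest-path
    costs to the top and bottom node of the current column; each step
    appends two horizontal edges and one vertical edge."""
    top, bot = 0.0, rng.random()          # vertical edge of the first column
    for _ in range(n):
        ht, hb, v = rng.random(), rng.random(), rng.random()
        a, b = top + ht, bot + hb         # arrive via the horizontal edges
        top, bot = min(a, b + v), min(b, a + v)   # optionally cross over
    return min(top, bot) / n              # empirical cost per column
```

The per-column recursion on the pair (top, bot) is exactly the kind of distributional fixed point the abstract mentions, here iterated on samples instead of distributions.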
Igal Milchtaich. The Equilibrium Existence Problem in Finite Network Congestion Games
Abstract: An open problem is presented regarding the existence of pure strategy Nash equilibrium (PNE) in network congestion games with a finite number of non-identical players, in which the strategy set of each player is the collection of all paths in a given network that link the player’s origin and destination vertices, and congestion increases the costs of edges. A network congestion game in which the players differ only in their origin–destination pairs is a potential game, which implies that, regardless of the exact functional form of the cost functions, it has a PNE. A PNE does not necessarily exist if (i) the dependence of the cost of each edge on the number of users is player- as well as edge-specific or (ii) the (possibly, edge-specific) cost is the same for all players but it is a function (not of the number but) of the total weight of the players using the edge, with each player having a different weight. In a parallel two-terminal network, in which the origin and the destination are the only vertices different edges have in common, a PNE always exists even if the players differ in either their cost functions or weights, but not in both. However, for general two-terminal networks this is not so. The problem is to characterize the class of all two-terminal network topologies for which the existence of a PNE is guaranteed even with player-specific costs, and the corresponding class for player-specific weights. Some progress in solving this problem is reported.
Carmine Ventre. Mechanisms with Verification for Any Finite Domain
Abstract: In this work we study \emph{mechanisms with verification}, as introduced by Nisan and Ronen [STOC~1999], to solve problems involving selfish agents. We provide a technique for designing \emph{truthful} mechanisms with verification that optimally solve the underlying optimization problem. Problems (optimally) solved with our technique belong to a rich class that includes, as special cases, \emph{utilitarian} problems and many others considered in the literature for so-called \emph{one-parameter} agents (e.g., the make-span studied by Archer and Tardos [STOC~2001]). Our technique extends the one recently presented by Auletta et al., as it works for \emph{any} finite multi-dimensional valuation domain. No method was known to deal with arbitrary domains. As a special case we obtain an alternative technique to optimally solve (though not in polynomial time) Scheduling Unrelated Machines, studied (and solved) by Nisan and Ronen. Interestingly enough, our technique also solves the case of \emph{compound} agents (i.e., agents declaring more than one value). No solution was known for designing mechanisms (with verification) for problems involving such a general kind of agent. As an application we provide the \emph{first} optimal truthful mechanism with verification for Scheduling Unrelated Machines in which every selfish agent controls more than one (unrelated) machine. Again, this mechanism does not run in polynomial time. We then turn our attention to truthful mechanisms approximating the optimization function. We provide methods leading to approximate solutions obtained with polynomial-time truthful mechanisms with verification. Using such methods we look for mechanisms, based on classical approximation algorithms, that run in polynomial time. Towards this aim we provide a technique that \emph{implements} any approximation algorithm while essentially preserving its approximation ratio.
This technique works for smooth problems involving compound agents composed of one-parameter valuations. We apply this technique to solve the Scheduling Related Compound Machines problem (i.e., agents control several related machines). If the number of machines is constant then our solution runs in polynomial time. Finally, we investigate the construction of mechanisms (with verification) for infinite domains. We show that existing techniques to obtain truthful mechanisms (for the case in which verification is not allowed), dealing with infinite domains, could completely annul the advantages that verification implies. Besides one-parameter agents, no result (neither positive nor negative) was known on the design of mechanisms with verification for infinite domains.
Paolo Penna, Guido Proietti and Peter Widmayer. Strongly Polynomial-Time Truthful Mechanisms in One Shot
Abstract: One of the main challenges in \emph{algorithmic mechanism design} is to turn (existing) efficient algorithmic solutions into efficient \emph{truthful mechanisms}. Building a truthful mechanism is indeed a difficult process since the underlying algorithm must obey certain ``monotonicity'' properties and suitable \emph{payment functions} need to be computed (this task usually represents the bottleneck in the overall time complexity). We provide a general technique for building truthful mechanisms that provide optimal solutions in \emph{strongly polynomial time}. We show that the \emph{entire} mechanism can be obtained if one is able to express/write a strongly polynomial-time algorithm (for the corresponding optimization problem) as a ``suitable combination'' of simpler algorithms. This approach applies to a wide class of \emph{mechanism design graph problems}, where each selfish agent corresponds to a weighted edge in a graph (the weight of the edge is the cost of using that edge). Our technique can be applied to several optimization problems which prior results cannot handle (e.g., MIN-MAX optimization problems). As an application, we design the first (strongly polynomial-time) truthful mechanism for the \emph{minimum diameter spanning tree} problem, by obtaining it directly from an existing algorithm for solving this problem. For this non-utilitarian MIN-MAX problem, no truthful mechanism was known, even considering those running in exponential time (indeed, exact algorithms do not necessarily yield truthful mechanisms). Also, standard techniques for payment computations may result in a running time which is not polynomial in the size of the input graph. The overall running time of our mechanism, instead, is polynomial in the number $n$ of nodes and $m$ of edges, and it is only a factor $O(n\,\alpha(n,n))$ away from the best known canonical centralized algorithm.
Mukund Sundararajan, Shuchi Chawla and Tim Roughgarden. Optimal Cost-sharing Mechanisms for Steiner Forest Problems
Abstract: K\"onemann, Leonardi, and Sch\"afer~\cite{KLS05} gave a 2-budget-balanced and group-strategyproof mechanism for Steiner forest cost-sharing problems. We prove that this mechanism also achieves an $O(\log^2 k)$-approximation of the social cost, where $k$ is the number of players. As a consequence, the KLS mechanism has the smallest-possible worst-case efficiency loss, up to constant factors, among all $O(1)$-budget-balanced Moulin mechanisms for such cost functions. We also extend our results to a more general network design problem.
Juliane Dunkel and Andreas S. Schulz. On the Complexity of Pure-Strategy Nash Equilibria in Congestion and Local-Effect Games
Abstract: Congestion games are a fundamental class of noncooperative games possessing pure-strategy Nash equilibria. In the network version, each player wants to send one unit of flow on a path from her origin to her destination at minimum cost, and the cost of using an arc only depends on the total number of players using that arc. A natural extension is to allow for players sending different amounts of flow, which results in so-called weighted congestion games. While examples have been exhibited showing that pure Nash equilibria need not exist, we prove that it actually is NP-hard to determine whether a given weighted network congestion game with convex delay functions has a pure-strategy Nash equilibrium. This is true regardless of whether flow is unsplittable (has to be routed on a single path for each player) or not. A related family of compactly encoded games are local-effect games, where the utility of a player taking a particular action depends on the number of players taking the same action and the number of players choosing locally related actions. We show that the problem of deciding whether a bidirectional local-effect game has a pure-strategy Nash equilibrium is NP-complete, and that the problem of finding a pure-strategy Nash equilibrium in a bidirectional local-effect game with linear local-effect functions (where the existence of a pure Nash equilibrium is guaranteed) is PLS-complete. The latter proof uses a tight PLS-reduction, which implies that the best-response dynamics requires exponential time in the worst case.
Xi Chen, Li-Sha Huang and Shang-Hua Teng. Market Equilibria with Hybrid Linear-Leontief Utilities
Abstract: We introduce a new family of utility functions for exchange markets. This family provides a natural and "continuous" hybridization of the traditional linear and Leontief utilities and might be useful in understanding the complexity of computing and approximating market equilibria. We show that a Fisher equilibrium of an exchange market with $M$ commodities, $n$ traders, and hybrid linear-Leontief utilities can be found in $O(\sqrt{Mn}(M+n)^3L)$ time. Because this family of utility functions contains Leontief utility functions as special cases, finding an approximate Arrow-Debreu equilibrium with hybrid linear-Leontief utilities is \textbf{PPAD}-hard in general. In contrast, we show that, when the Leontief components are grouped, finite and well-conditioned, we can efficiently compute an approximate Arrow-Debreu equilibrium.
Antoniy Ganchev, Lata Narayanan and Sunil Shende. Mechanisms to induce random choice
Abstract: Media access protocols in wireless networks require each contending node to wait for a backoff time chosen randomly from a fixed range, before attempting to transmit on a shared channel. However, nodes acting in their own selfish interest may not follow the protocol. In this paper, we use a mechanism design approach to study how nodes might be induced to adhere to the protocol. In particular, a static version of the problem is modeled as a strategic game (the protocol) played by non-cooperating, rational players (the nodes). We present a game which exhibits a unique mixed-strategy Nash equilibrium that corresponds to nodes choosing backoff times randomly from a given range of values, according to any a priori given distribution. We extend this result to the situation when each player can choose a backoff value from a different range, provided there are at least two players choosing from the largest range. In contrast, we show that if there are exactly two players with different backoff ranges, then it becomes impossible to design a strategic game with a unique such Nash equilibrium. Finally, we show an impossibility result under certain natural limitations on the network authority.
Shuchi Chawla, Jason Hartline, Uday Rajan and R. Ravi. Optimal No-deficit Mechanism Design
Abstract: One of the most fundamental problems in mechanism design is that of designing the auction that gives the optimal profit to the auctioneer. For the case that the probability distributions on the valuations of the bidders are known and independent, Myerson (81) reduces the problem to that of maximizing the common welfare by considering the "virtual valuations" in place of the bidders' actual valuations. The Myerson auction maximizes the seller's profit over the class of all mechanisms that are truthful and individually rational for all the bidders; however, the mechanism does not satisfy ex post individual rationality for the seller. In other words, there are examples in which for certain sets of bidder valuations, the mechanism incurs a loss. In this paper, we consider the problem of merging the worst-case "no-deficit" (or individual rationality) condition with this average-case Bayesian expected profit maximization problem. When restricting our attention to ex post incentive compatible mechanisms for this problem, we find that the Myerson mechanism is the optimal no-deficit mechanism for supermodular costs, that Myerson merged with a simple thresholding mechanism is optimal for "all-or-nothing" costs, and that neither mechanism is optimal for general submodular costs. Addressing the computational side of the problem, we show that for supermodular costs the Myerson mechanism is NP-hard to compute, and that for all-or-nothing costs the optimal thresholding mechanism is NP-hard to compute. Finally, we consider relaxing the ex post incentive compatibility constraint and show that there is a Bayesian incentive compatible mechanism that achieves the same expected profit as Myerson, but never incurs a loss.
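As background for the virtual-valuation reduction mentioned in the abstract, here is a minimal Python sketch (not from the paper) of Myerson's single-item virtual value $\phi(v) = v - (1 - F(v))/f(v)$, instantiated for a Uniform[0,1] prior, where the item goes to the bidder with the highest non-negative virtual value. The names `virtual_value` and `myerson_winner` are illustrative.

```python
def virtual_value(v, F, f):
    """Myerson virtual valuation: phi(v) = v - (1 - F(v)) / f(v)."""
    return v - (1.0 - F(v)) / f(v)

# Uniform[0,1] prior: F(v) = v, f(v) = 1, hence phi(v) = 2v - 1
# and the optimal reserve price is phi^{-1}(0) = 0.5.
phi = lambda v: virtual_value(v, F=lambda x: x, f=lambda x: 1.0)

def myerson_winner(bids):
    """Allocate to the bidder with the highest virtual value,
    or to nobody if all virtual values are negative (reserve price)."""
    best = max(range(len(bids)), key=lambda i: phi(bids[i]))
    return best if phi(bids[best]) >= 0 else None
```

With symmetric bidders and a Uniform[0,1] prior this reduces to a second-price auction with reserve 0.5, which is the standard illustration of the reduction.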
Nicole Immorlica, Robert Kleinberg and Mohammad Mahdian. Secretary problems with competing employers
Abstract: In many decentralized labor markets, job candidates are offered positions at very early stages in the hiring process.  It has been argued that these early offers are an effect of the competition between employers for the best candidate.  This work studies the timing of offers in a theoretical model based on the classical secretary problem.  We consider a secretary problem with multiple employers and study the equilibria of the induced game.  Our results confirm the observation of early offers in labor markets: for several classes of strategies based on optimal stopping theory, as the number of employers grows, the timing of the earliest offer decreases.
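For readers unfamiliar with the classical (single-employer) secretary problem underlying this model, here is a short Python sketch of the standard $1/e$ stopping rule: observe the first $n/e$ candidates, then hire the first one better than all observed so far. The function name `secretary_choice` and the Monte-Carlo check are illustrative, not from the paper.

```python
import math
import random

def secretary_choice(values):
    """Classical 1/e rule: skip the first n/e candidates, then hire the
    first candidate better than every candidate observed so far.
    Falls back to the last candidate if none qualifies."""
    n = len(values)
    r = max(1, int(n / math.e))
    threshold = max(values[:r])
    for i in range(r, n):
        if values[i] > threshold:
            return i
    return n - 1

# Monte-Carlo check: the rule hires the best candidate with
# probability close to 1/e (about 0.37).
random.seed(0)
n, trials = 50, 20000
hits = sum(
    secretary_choice(perm) == perm.index(max(perm))
    for perm in (random.sample(range(n), n) for _ in range(trials))
)
p = hits / trials
```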
Burkhard Monien, Florian Schoppmann, Karsten Tiemann and Vladimir Mazalov. Wardrop Equilibria and Price of Stability for Bottleneck Games with Splittable Traffic
Abstract: We look at the scenario of having to route a continuous rate of traffic through a network, with the objective of maximizing throughput. This is of interest, e.g., for providers of streaming content in communication networks. The overall path latency, which was relevant in other non-cooperative network routing games such as the classic Wardrop model, is of lesser concern here. To that end, we define bottleneck games with splittable traffic where the throughput on a path is inversely proportional to the maximum latency of an edge on that very path---the bottleneck latency. Therefore, we define a Wardrop equilibrium as a traffic distribution where this bottleneck latency is at minimum on all used paths. As a measure for the overall system well-being---called social cost---we take the weighted sum of the bottleneck latencies of all paths. Our main findings are as follows: First, we prove that the social cost of Wardrop equilibria on series-parallel graphs is unique. Even more, for any graph whose subgraph induced by all simple start-destination paths is not series-parallel, there exist games having equilibria with different social cost. For the price of stability, we give an independence result with regard to the network topology. Finally, our main result gives a new exact price of stability for Wardrop/bottleneck games on parallel links with M/M/1 latency functions. This result is at the same time the exact price of stability for bottleneck games on general graphs.
Theodore Komninos, Yannis Stamatiou and George Vavitsas. A worm propagation model based on people's email acquaintance profiles
Abstract: One way of propagation frequently exploited by worms is through the victim's contact book. The contact book, which reflects the acquaintance profiles of people, is used as a ``hit-list'', to which the worm can send itself in order to spread fast. In this paper we propose a discrete worm propagation model that relies upon a combined email and Instant Messaging (IM) communication behaviour of users. We also model user reaction to infected email as well as the rate at which antivirus software is installed. User acquaintance is perceived as a ``network'' connecting users based on their contact book links. We subsequently propose a rigorous worm propagation formulation utilising a suitably defined token propagation algorithm, further analyzed using a system of continuous differential equations, as dictated by Wormald's theorem on approximating ``well-behaving'' random processes with deterministic functions.
Constantinos Daskalakis, Aranyak Mehta and Christos Papadimitriou. A Note on Approximate Nash Equilibria
Abstract: In view of the intractability of finding a Nash equilibrium, it is important to understand the limits of approximation in this context. A subexponential approximation scheme is known \cite{LMM}, and no approximation better than $1\over 4$ is possible by any algorithm that examines equilibria involving fewer than $\log n$ strategies \cite{althofer}. We give a simple, linear-time algorithm examining just two strategies per player and resulting in a $1\over 2$-approximate Nash equilibrium in any 2-player game. For the more demanding notion of {\em well-supported approximate equilibrium} due to \cite{DGP} no nontrivial bound is known; we show that the problem can be reduced to the case of win-lose games (games with all utilities $0-1$), and that an approximation of $5\over 6$ is possible contingent upon a graph-theoretic conjecture.
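The two-strategies-per-player idea in the abstract can be realized by a simple "best-respond twice, then mix" rule; the sketch below is an illustration under the assumption that payoffs are normalized to $[0,1]$, and the function name is hypothetical.

```python
def half_approx_nash(R, C):
    """Sketch of a 1/2-approximate Nash equilibrium for a 2-player game
    with row/column payoff matrices R and C (entries assumed in [0, 1]):
    fix an arbitrary row i, let j be the column player's best response
    to i, let k be the row player's best response to j, and have the
    row player mix i and k equally while the column player plays j."""
    n, m = len(R), len(R[0])
    i = 0                                     # arbitrary row strategy
    j = max(range(m), key=lambda c: C[i][c])  # column best response to i
    k = max(range(n), key=lambda r: R[r][j])  # row best response to j
    x = [0.0] * n
    x[i] += 0.5
    x[k] += 0.5                               # handles i == k correctly
    y = [0.0] * m
    y[j] = 1.0
    return x, y
```

Intuitively, the column player has no regret (she best-responds to $i$, which the row player plays with probability $1/2$), and the row player secures at least half of his best-response payoff through $k$; this is the source of the $1/2$ guarantee.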


Site maintenance: Spyros Kontogiannis. Last update: 01/08/2006.