
Alessandro Pavan
Northwestern University

Ilya Segal
Stanford University

Juuso Toikka
Stanford University

May 28, 2008

Abstract

This paper examines the problem of how to design incentive-compatible mechanisms in environments in which the agents' private information evolves stochastically over time and in which decisions have to be made in each period. The environments we consider are fairly general in that the agents' types are allowed to evolve in a non-Markov way, decisions are allowed to affect the type distributions, and payoffs are not restricted to be separable over time. Our first result is the characterization of a dynamic formula for (the derivative of) the agents' equilibrium payoffs in an incentive-compatible mechanism. The formula summarizes all local first-order conditions, taking into account how current types affect the dynamics of expected payoffs. The formula generalizes the familiar envelope condition from static mechanism design: the key difference is that a variation in the current types now impacts payoffs in all subsequent periods, both directly and through the effect on the distributions of future types. We first identify assumptions on the primitive environment that guarantee that our dynamic payoff formula is a necessary condition for incentive compatibility. Next, we specialize this formula to quasi-linear environments and use it to establish a dynamic revenue-equivalence result. Lastly, we turn to the characterization of sufficient conditions for incentive compatibility. We then apply the results to study the properties of revenue-maximizing mechanisms in a variety of applications that include dynamic auctions with AR(k) values and the provision of experience goods.

JEL Classification Numbers: D82, C73, L1.

Keywords: dynamic mechanisms, asymmetric information, stochastic processes, long-term contracting, incentives

This paper supersedes the previous working papers "Revenue Equivalence, Profit Maximization, and Transparency in Dynamic Mechanisms" by Segal and Toikka and "Long-Term Contracting in a Changing World" by Pavan. Acknowledgements to be added.

1 Introduction

We consider the problem of how to design incentive-compatible mechanisms in a dynamic environment in which agents receive private information over time and decisions may be made over time. The model allows for serial correlation of the agents' private information as well as for the dependence of information on past allocations. For example, it covers as special cases such problems as the allocation of resources to agents whose valuations follow a stochastic process, the design of procedures for selling new experience goods whose value is refined by the buyers upon consumption, and the design of multiperiod procurement auctions for bidders whose cost parameters evolve stochastically over time and may exhibit learning-by-doing effects.

The fundamental difference between dynamic and static mechanism design is that in the former, an agent has access to many more potential deviations. Namely, instead of a simple misrepresentation of his true type, the agent can condition his reports on the information he has observed in the mechanism, in particular on his past types, his past reports (which need not have been truthful), and what he has inferred about the other agents' types in the course of the mechanism. Despite the resulting complications, we deliver some general necessary conditions for incentive compatibility and some sufficient conditions, and use them to characterize profit-maximizing mechanisms in several applications.

The cornerstone of our analysis is the derivation of a formula for the derivative of an agent's expected payoff in an incentive-compatible mechanism with respect to his private information. Similarly to Mirrlees's first-order approach for static environments (Mirrlees, 1971), our formula (hereafter referred to as the dynamic payoff formula) provides an envelope-theorem condition summarizing local incentive compatibility constraints.
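Anticipating the notation introduced in Section 3, the shape of the dynamic payoff formula can be sketched as follows. The display below is only a schematic rendering, not the precise statement (which is derived, together with its assumptions, in the body of the paper): $V$ denotes the agent's equilibrium expected payoff, and the weights $I_t$ are the "impulse responses" of the period-$t$ type to the first-period type.

```latex
\frac{\partial V(\theta_1)}{\partial \theta_1}
  = \mathbb{E}\left[ \sum_{t=1}^{T}
      I_t\!\left(\tilde{\theta}^{t}, \tilde{y}^{t-1}\right)
      \frac{\partial U(\tilde{\theta}, \tilde{y})}{\partial \theta_t}
    \right],
  \qquad I_1 \equiv 1 .
```

When types are serially independent, $I_t = 0$ for all $t > 1$, and the display collapses to the static envelope condition of Milgrom and Segal (2002).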
In contrast to the static model, however, the derivation of this formula relies on incentive compatibility in all future periods, not just in one given period. Furthermore, unlike some of the earlier papers on dynamic mechanism design, we identify conditions on the primitive environment under which the dynamic payoff formula is a necessary condition for any incentive-compatible mechanism (not just for "well-behaved" ones). In addition to carrying over the usual static assumptions of "smoothness" of the agent's payoff function in his type and connectedness of the type space (see, e.g., Milgrom and Segal, 2002), the dynamic setting requires additional assumptions on the stochastic process governing the evolution of each agent's information.

Intuitively, our dynamic payoff formula represents the impact of an (infinitesimal) change in the agent's current type on his equilibrium expected payoff. This change can be decomposed into two parts. The first is the familiar effect of the current type on the agent's expected utility, as in static mechanism design. The second captures the indirect effect of the current type on the expected utility through its impact on the type distributions in each of the subsequent periods. Note that in general the current type may affect the future type distributions directly as well as indirectly through its impact on the type distributions in intermediate periods. All changes in the type distributions are then evaluated by looking at their ultimate impact on the agent's utility, holding constant the agent's messages to the mechanism (by the usual envelope theorem logic).

The dynamic payoff formula can be established either by iterating backward the local incentive-compatibility conditions or by using the quantile function theorem (see, e.g., Angus, 1994) to represent the agents' types as the result of independent innovations (shocks). While the two approaches lead to the same formula, the conditions on the primitive environment that validate this formula as a necessary condition for incentive compatibility are somewhat different. In this sense the two approaches are complementary (see also Eso and Szentes, 2007, for a similar approach in a two-period, one-decision model).

To ease the exposition, in the first part of the paper (Section 3) we consider an environment with a single agent who observes all the relevant history of the mechanism. There we derive the envelope formula that determines the agent's equilibrium payoff in an incentive-compatible mechanism. In Section 4 we then show how to adapt the envelope formula to a multi-agent environment. The key difference between the two settings is that in the latter an agent observes only a part of the entire history generated by the mechanism: an agent must thus form beliefs about the unobserved types of the other agents as well as the decisions that the mechanism has induced with these agents. We show that the derivation for the single-agent case extends to multi-agent mechanisms provided that the stochastic processes governing the evolution of the agents' types are independent of one another, except through their effect on the decisions that are observed by the agents.
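The independent-shock representation behind the second approach can be illustrated with a small simulation. The conditional law used below, $\theta_2 \mid \theta_1 \sim U[0, \theta_1]$, and its quantile function are illustrative assumptions, not part of the paper's model; the point is only that feeding i.i.d. uniform shocks through the conditional quantile function reproduces the conditional type distribution.

```python
import random

def quantile_u2(eps, theta1):
    """Conditional quantile F_2^{-1}(eps | theta1) for the illustrative
    kernel theta_2 | theta_1 ~ Uniform[0, theta1]: since
    F_2(x | theta1) = x / theta1, the quantile is eps * theta1."""
    return eps * theta1

def draw_theta2(theta1, quantile, n, rng):
    """Independent-shock representation: theta_2 = quantile(eps, theta1)
    with eps ~ U(0,1) i.i.d., as in the Probability Integral Transform."""
    return [quantile(rng.random(), theta1) for _ in range(n)]

rng = random.Random(0)
draws = draw_theta2(2.0, quantile_u2, 100_000, rng)
mean = sum(draws) / len(draws)
# theta_2 | theta_1 = 2 is Uniform[0, 2], so the sample mean is close to 1
```

Applied recursively, period by period, the same construction represents the entire type process as a function of the first-period type and serially independent shocks.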
In other words, we show how the familiar "Independent Types" assumption from static mechanism design should be properly adjusted in a dynamic setting to guarantee that the agents' equilibrium payoffs can still be pinned down by an envelope formula.

For the special case of quasilinear environments, we first use the dynamic envelope formula to establish a dynamic "revenue equivalence theorem" that links the payment rules in any two Bayesian incentive-compatible mechanisms that implement the same allocation rule. In particular, if we have a single agent who participates in a deterministic mechanism, this theorem pins down, in each state, the total payment that is necessary to implement a given allocation rule, up to a scalar that does not depend on the state. With many agents, or with a stochastic mechanism, the theorem pins down the expected payments as a function of each agent's type history, where the expectation is with respect to the other agents' types and/or the stochastic decisions taken by the mechanism. However, if one requires a strong form of "robustness", according to which the mechanism must remain incentive-compatible even if an agent is shown at the very beginning of the game all the other agents' (future) types, then the theorem again pins down the total payments for each state.


Next, we use the dynamic envelope formula to express the expected profits in an incentive-compatible and individually rational mechanism as the expected "virtual surplus," appropriately defined for the dynamic setting. This derivation uses only the agents' local incentive constraints, and only the participation constraints of the lowest types in the initial period. Ignoring all the other incentive and participation constraints yields the dynamic "Relaxed Program," which is in general a dynamic programming problem. In particular, the Relaxed Program gives us a simple intuition for the optimal distortions introduced by a profit-maximizing principal: since only the first-period participation constraints bind (this is due to the unlimited bonding possibilities in the quasilinear environment with unbounded transfers), the distortions are created to balance the rent-extraction versus efficiency trade-off, as perceived from the perspective of period one. However, due to informational linkages in the stochastic type process, the principal distorts the agent's consumption not only in period one but also in any subsequent period t whenever his type in period t is informative about the first-period type. The informativeness is here measured by an "information index" that captures all the direct and indirect effects of the first-period type on the type distributions in all subsequent periods. It turns out that when an agent's type in period t > 1 hits its highest or lowest possible value, the informational linkage disappears and the principal implements the efficient level of consumption in that period (provided that payoffs are additively time-separable). However, for intermediate types in period t, the optimal mechanism entails distortions (for example, when types are positively correlated over time in the sense of First-Order Stochastic Dominance and the agent's payoffs satisfy the single-crossing property, the optimal mechanism entails downward distortions).
Thus, in contrast to the static model, with a continuous but bounded type space, distortions in each period t > 1 are never monotonic in the agent's type. This is also in contrast with the results of Battaglini (2005) for the case of a Markov process with only two types in each period.

Studying the Relaxed Program is not fully satisfactory unless one also provides sufficient conditions for its solution to satisfy all of the remaining incentive and participation constraints. We are indeed able to provide some such conditions. In particular, we show that in the case where the agents' types follow a Markov process and their payoffs are Markovian in their types (so that it is enough to check one-stage deviations from truthtelling), a sufficient condition for an allocation rule to be implementable is that the partial derivative of the agent's expected utility with respect to his current type when he misreports be nondecreasing in the report. One can then use the dynamic payoff formula to calculate this partial derivative, so the condition is fairly easy to check. (Unfortunately, this condition is not necessary for incentive compatibility; a tight characterization is elusive because of the multidimensional decision space of the problem.) This sufficient condition also turns out to be useful when checking incentive compatibility in some non-Markov settings that are sufficiently "separable."

In some standard settings we can actually state an even simpler sufficient condition for incentive compatibility, which also ensures that incentive compatibility is robust to an agent learning in advance all of the other agents' types (and therefore to any weaker form of information leakage in the mechanism). This condition is that the transitions that describe the evolution of the agents' private information are monotone in the sense of First-Order Stochastic Dominance, the payoffs satisfy a single-crossing property, and the allocation rule is "strongly monotonic" in the sense that the consumption of a given agent in any period is nondecreasing in each of the agent's type reports, for any given profile of reports by the other agents.

In Section 5, we apply the general results to a few simple, yet illuminating, applications. The analysis proves especially simple when the agents' types follow an autoregressive stochastic process of degree k (AR(k)). If we assume in addition that each agent's payoff is affine in his types (but not necessarily in his consumption), then the principal's Relaxed Program turns out to be very similar to the expected social surplus maximization program, the only difference being that the agents' true values in each period are replaced by their corresponding "virtual values." In the AR(k) case, the difference between an agent's true value and his virtual value in period t, which can be called his "handicap" in period t, is determined by the agent's first-period type, the hazard rate of the first-period type's distribution, and the "impulse response coefficient" of the AR(k) process.[1] Intuitively, the impulse response coefficient determines the informational link between period t and period 1, while the first-period hazard rate captures the importance that the principal assigns to the trade-off between efficiency and rent-extraction as perceived from period one's perspective (just as in the static model). Importantly, since the handicaps depend only on the first-period type reports, the Relaxed Program at any period t ≥ 2 can be solved by running an efficient (i.e., expected surplus-maximizing) mechanism on the handicapped values. Thus, while building an efficient mechanism may in general require solving an involved dynamic programming problem (due to possible intertemporal payoff interactions), once a solution is found it can be easily adapted to obtain a solution to the Relaxed Program. We also use the fact that the solution to the Relaxed Program looks "quasi-efficient" from period 2 onward to show that it can be implemented in a mechanism that is incentive compatible from period 2 onward (following truthtelling in period one). This can be done for example using the "Team Mechanism" payments proposed by Athey and Segal (2007) to implement efficient allocation rules. As for verifying incentives in period 1, we have only been able to do so in a few special settings.

We also consider two other applications. The first is the design of sequential auctions for environments in which the agents' payoffs are time-separable while their private types follow an AR(k) process. This setting is particularly simple because the Relaxed Program separates across periods and states, and so we do not need to solve a dynamic programming problem. Under the standard monotone hazard rate assumption on the agents' initial type distribution and the standard third-derivative assumption on their utility functions, the Relaxed Program is solved by a Strongly Monotone allocation rule, which then implies that it is implementable in an incentive-compatible mechanism (and one that is robust to information leakage). The optimal mechanism exhibits some interesting properties: for example, an agent's consumption in a given period depends only on his initial report and his current report, but not on intermediate reports. This can be interpreted as a scheme where the agents make up-front payments that reduce their future distortions.

The second application is one in which an agent receives a signal about his unknown valuation for a new good each time he consumes it. The agent's expected value for the good then follows a martingale. The solution to the efficient dynamic programming problem in this setting takes the form of a stopping rule. The solution to the profit-maximization problem looks similar, except that the agent again makes a first-period report that determines his up-front payment and his subsequent handicaps. This optimal mechanism achieves a strictly higher expected profit than any pricing policy, even a history-contingent one.

The rest of the paper is organized as follows. Section 2 discusses the related literature. Section 3 presents the results for the single-agent case. Section 4 extends the analysis to quasi-linear settings with multiple agents. Section 5 presents a few applications. Section 5.3 contains proofs omitted in the main text.

[1] The term "handicapped auction" was first used in Eso and Szentes (2007).
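To fix ideas on the handicaps discussed above, the following sketch computes handicapped ("virtual") values for an AR(1) type process. The uniform first-period distribution and the geometric impulse response are illustrative assumptions made for concreteness, not expressions taken verbatim from the paper.

```python
def handicap(theta1, t, gamma):
    """Period-t handicap under an AR(1) process theta_t = gamma*theta_{t-1} + shock:
    the impulse response gamma**(t-1) of theta_t to theta_1, scaled by the inverse
    hazard rate of the first-period distribution. For theta_1 ~ Uniform[0, 1]
    (an illustrative assumption), (1 - F_1(theta1)) / f_1(theta1) = 1 - theta1."""
    impulse_response = gamma ** (t - 1)
    inverse_hazard = 1.0 - theta1
    return impulse_response * inverse_hazard

def virtual_value(theta1, theta_t, t, gamma):
    """Handicapped value: the true period-t value minus a handicap that
    depends only on the first-period report theta1."""
    return theta_t - handicap(theta1, t, gamma)
```

Since the handicap depends only on the first-period report, the period-t problem can be solved by running an efficient mechanism on the handicapped values. Note that the highest first-period type (here theta1 = 1) faces no distortion, and the handicap decays geometrically with t when gamma < 1.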

2 Related Literature[2]

The last few years have witnessed a fast-growing literature on dynamic mechanism design. A number of papers propose mechanisms for implementing efficient (i.e., expected surplus-maximizing) allocation rules that are the dynamic analogues of static VCG and expected-externality mechanisms (see, for example, Athey and Segal, 2007, and Bergemann and Valimaki, 2008, and the references therein). These papers do not characterize incentive compatibility, but provide some mechanisms that turn out to be incentive-compatible.

Our analysis is more closely related to the pioneering work of Baron and Besanko (1984) on the regulation of a natural monopoly and the more recent paper of Courty and Li (2000) on advance ticket sales. Both papers consider a two-period model with one agent and use the first-order approach to derive optimal mechanisms. The agent's types in the two periods are serially correlated, and this correlation determines the distortions in the optimal mechanism. Courty and Li also provide some sufficient conditions for the allocation rule to be implementable. Our paper builds on the ideas in these papers but extends the approach to allow for multiple periods, multiple agents, and a more general specification of the payoff and information structure. Contrary to these early papers, we also provide conditions on the primitive environment that validate the "first-order approach."

Related is also a more recent paper by Battaglini (2005), who considers a model with one agent and two types and derives an optimal selling mechanism for a monopolist facing a consumer whose type follows a Markov process. Our results for a model with continuous types indicate that many of his predictions seem specific to the special setting with only two types. We discuss the differences between the results in the two papers in more detail in subsection 4.6.[3]

Gershkov and Moldovanu (2007) consider both efficient and profit-maximizing mechanisms to allocate a fixed set of objects to buyers who arrive randomly over time. While the model has multiple agents, they assume that each agent lives only instantaneously. Hence the problem that each agent faces is actually static. The paper derives a payoff-equivalence result which is essentially a static payoff-equivalence result applied separately to each short-lived agent. In contrast, we allow the agents to be long-lived.[4]

Eso and Szentes (2007) consider a two-period model with many agents but with a single decision in the second period. They propose an approach to the characterization of optimal mechanisms different from that in Baron and Besanko (1984) and Courty and Li (2000). Their approach consists in using the Probability Integral Transform Theorem to represent an agent's second-period type as a function of his first-period type and a random shock that is independent of the first-period type. In Section 3.3 we show how the Probability Integral Transform Theorem can be used recursively in a setting with more than two periods to describe the entire stochastic process that governs the evolution of the agents' private information by means of serially independent shocks. We then show how the independent-shock representation can be used to derive our dynamic payoff formula under a somewhat different set of assumptions. Eso and Szentes also derive a profit-maximizing auction and coin the term "handicapped auction" to describe it. However, in their two-period AR(1) setting, it turns out that any incentive-compatible mechanism, not just a profit-maximizing one, can be viewed as a "handicapped auction." What we find more surprising is that under the special assumptions of an AR(k) type process and affine payoffs, even with many periods the optimal mechanism remains a "handicapped mechanism." The distinguishing feature of such mechanisms is that the allocation in a given period depends only on that period's reports and the reports in the first period; it is thus independent of the reports in all intermediate periods.[5]

The paper is also related to a more "macroish" literature on optimal dynamic taxation. While the early literature typically assumes i.i.d. shocks (e.g., Green, 1987, Thomas and Worrall, 1990, Atkeson and Lucas, 1992), the more recent literature considers the case of persistent private information (e.g., Fernandes and Phelan, 2000, Golosov, Kocherlakota, and Tsyvinski, 2003, Kocherlakota, 2005, Golosov and Tsyvinski, 2006, Kapicka, 2008, Tchistyi, 2006, Biais, Mariotti, Plantin, and Rochet, 2007, Zhang, 2007, Williams, 2008). While our work shares several modelling assumptions with some of the papers in this literature, its key distinctive aspect is the general characterization of incentive compatibility, as opposed to the features of the optimal mechanism in the context of specific applications.[6]

Dynamic mechanism design is also inherently related to the literature on multidimensional screening, as noted, e.g., in Rochet and Stole (2003). Indeed, it is the multidimensional nature of the problem that prevents a complete characterization of all implementable allocation rules. Nevertheless, there is a sense in which incentive compatibility is much easier to ensure in a dynamic mechanism than in a static multidimensional mechanism. This is because in a dynamic environment an agent is asked to report each dimension of his private information before learning the subsequent dimensions. By implication, there are fewer deviations than in the corresponding static environment in which the agents observe all the dimensions at once. Because of this, the set of allocation rules that are implementable in a dynamic environment proves to be significantly larger than the set of allocation rules that are implementable in the corresponding static multidimensional environment. For example, the profit-maximizing dynamic allocation rules we characterize are typically not implementable if the agents were to observe all of their private information at the outset of the mechanism.

We also touch here upon the issue of transparency in mechanisms. Calzolari and Pavan (2005, 2006) study its role in environments in which downstream actions (e.g., resale offers in secondary markets, or more generally contract offers in sequential common agency) are not contractable upstream. Pancs (2007) also studies the role of transparency in environments where agents take nonenforceable actions such as investment or information acquisition.

[2] This section is still very much incomplete. We apologize to the many authors whose work should have been discussed and that we omitted here.
[3] See also our companion paper, "On the Dynamics of Distortions in Long-Term Contracting," for a further discussion.
[4] Other recent papers that study dynamic profit-maximizing mechanisms include Bognar, Börgers, and Meyer-ter-Vehn, 2008, and Zhang, 2008. The key difference between these papers and ours is that they look at particular issues that can emerge in dynamic environments, such as costly participation, while ours abstracts from some of these issues and instead provides a more general characterization of incentive compatibility.
[5] Another key difference between the two papers is that, while Eso and Szentes use their model primarily to study the effects of the seller's information disclosures on surplus extraction, here we focus on the characterization of incentive compatibility in general dynamic mechanisms. For this purpose, it is essential to allow for non-Markov processes and non-time-separable preferences, and to permit decisions to affect the type distributions.
[6] Some of the works in this literature limit the analysis to the characterization of local first-order conditions (e.g., the inverse Euler equation) and either leave the dynamics of the optimal mechanism unspecified or solve it numerically.

3 Single-agent case

3.1 General setup

3.1.1 The Environment

We consider an environment with one agent and finitely many periods, indexed by $t = 1, 2, \ldots, T$. In each period $t$ there is a contractible decision $y_t \in Y_t$, whose outcome is observed by the agent. (In the next section we apply the model to a more general setup where $y_t$ is the part of the decision taken in period $t$ that is observed by the agent.) Each $Y_t$ is assumed to be a measurable space with the sigma-algebra left implicit.[7] The set of all period-$t$ decision histories is denoted $Y^t \equiv \prod_{\tau=1}^{t} Y_\tau$. For the full histories we drop the superscripts, so that $y$ is an element of $Y \equiv Y^T$.

Before the period-$t$ decision is taken, the agent privately observes his current type $\theta_t \in \Theta_t \equiv (\underline{\theta}_t, \bar{\theta}_t) \subseteq \mathbb{R}$, where $-\infty \leq \underline{\theta}_t < \bar{\theta}_t \leq +\infty$. We implicitly endow the set $\Theta_t$ with the Borel sigma-algebra. The set of possible type histories at period $t$ is denoted by $\Theta^t \equiv \prod_{\tau=1}^{t} \Theta_\tau$. An element $\theta$ of $\Theta \equiv \Theta^T$ is referred to as the agent's type.

The distribution of the current type $\theta_t$ may depend on the entire history of types and decisions $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1} \times Y^{t-1}$. In particular, we assume that the distribution of $\theta_t$ is governed by a history-dependent probability measure ("kernel") $F_t : \Theta^{t-1} \times Y^{t-1} \to \Delta(\Theta_t)$ such that $F_t(A \mid \cdot) : \Theta^{t-1} \times Y^{t-1} \to \mathbb{R}$ is measurable for all measurable $A \subseteq \Theta_t$.[8] Note that the distribution of $\theta_t$ depends only on variables observed by the agent. We denote the collection of kernels by $F \equiv (F_t)_{t=1}^{T}$, where for any measurable set $A$, $\Delta(A)$ denotes the set of probability measures on $A$. We abuse notation by using $F_t(\cdot \mid \theta^{t-1}, y^{t-1})$ to denote also the cumulative distribution function (cdf) corresponding to the measure $F_t(\cdot \mid \theta^{t-1}, y^{t-1})$.

The agent is a von Neumann-Morgenstern decision maker whose preferences over lotteries over $\Theta \times Y$ are represented by the expectation of a (measurable) Bernoulli utility function $U : \Theta \times Y \to \mathbb{R}$. (Although some form of time separability of $U$ is typically assumed in applications, it is not needed for the general results.)

An often encountered special case in applications is one where private information evolves in a Markovian fashion, and where the agent's payoff is Markovian in the following sense.

Definition 1 The environment is Markov if

1. for all $t$, and all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1} \times Y^{t-1}$, $F_t(\cdot \mid \theta^{t-1}, y^{t-1})$ does not depend on $\theta^{t-2}$, and

2. there exist functions $A_t : \Theta_t \times Y^t \to \mathbb{R}_{++}$ and $B_t : \Theta_t \times Y^t \to \mathbb{R}$, $t = 1, \ldots, T$, such that for all $(\theta, y) \in \Theta \times Y$,

$$U(\theta, y) = \sum_{t=1}^{T} \left[ \prod_{\tau=1}^{t-1} A_\tau(\theta_\tau, y^\tau) \right] B_t(\theta_t, y^t). \qquad (1)$$

Condition (1) ensures that in any given period $t$, after observing history $(\theta^t, y^t)$, the agent's von Neumann-Morgenstern preferences over future lotteries depend on his type history $\theta^t$ only through the current type $\theta_t$. In particular, it encompasses the case of additively separable preferences ($A_t \equiv 1$ for all $t$) as well as the case of multiplicatively separable preferences ($B_t(\theta_t, y^t) = 0$ for all $t < T$).

[7] By convention, all products of measurable spaces encountered in the text are endowed with the product sigma-algebra.
[8] Throughout, we adopt the convention that for any set $A$, $A^0 \equiv \{\varnothing\}$.

3.1.2 Mechanisms

A mechanism in the above environment assigns a set of possible messages to the agent in each period. The agent sends a message from this set and the mechanism responds with a (possibly randomized) decision that may depend on the entire history of messages sent up to period $t$, and on past decisions. By the Revelation Principle (adapted from Myerson, 1986), for any standard solution concept, any distribution on $\Theta \times Y$ that can be induced as an equilibrium outcome in any mechanism can be induced as an equilibrium outcome of a "direct mechanism" in which the agent is asked to report the current type in each period, and in equilibrium he reports truthfully. Let $m_t \in \Theta_t$ denote the agent's period-$t$ message, and let $m^t \equiv (m_1, \ldots, m_t)$.

Definition 2 A direct mechanism is a collection $\phi \equiv (\phi_t)_{t=1}^{T}$ of kernels $\phi_t : \Theta^t \times Y^{t-1} \to \Delta(Y_t)$ such that for all $t$, and all measurable $A \subseteq Y_t$, $\phi_t(A \mid \cdot) : \Theta^t \times Y^{t-1} \to [0,1]$ is measurable.

(The notation $\phi_t(A \mid m^t, y^{t-1})$ stands for the probability of the mechanism generating $y_t \in A$ given history $(m^t, y^{t-1}) \in \Theta^t \times Y^{t-1}$.)

Given a direct mechanism $\phi$ and a history $(\theta^{t-1}, m^{t-1}, y^{t-1}) \in \Theta^{t-1} \times \Theta^{t-1} \times Y^{t-1}$, the following sequence of events takes place in each period $t$:

1. The agent privately observes his current type $\theta_t \in \Theta_t$, drawn according to $F_t(\cdot \mid \theta^{t-1}, y^{t-1})$.

2. The agent sends a message $m_t \in \Theta_t$.

3. The mechanism selects a decision $y_t \in Y_t$ according to $\phi_t(\cdot \mid m^t, y^{t-1})$.
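For concreteness, the per-period timing above can be sketched as a simulation loop. The primitives below (an AR(1) kernel, the truthful strategy, and a mechanism that sets the decision equal to the report) are illustrative assumptions, not objects from the paper.

```python
import random

def run_direct_mechanism(T, kernel, mechanism, strategy, rng):
    """One play of a direct mechanism.
    kernel(t, thetas, ys, rng)    -> draws theta_t given (theta^{t-1}, y^{t-1})
    strategy(t, thetas, msgs, ys) -> the agent's report m_t
    mechanism(t, msgs, ys, rng)   -> draws the decision y_t given (m^t, y^{t-1})"""
    thetas, msgs, ys = [], [], []
    for t in range(1, T + 1):
        thetas.append(kernel(t, thetas, ys, rng))   # 1. observe current type
        msgs.append(strategy(t, thetas, msgs, ys))  # 2. send a message
        ys.append(mechanism(t, msgs, ys, rng))      # 3. mechanism picks a decision
    return thetas, msgs, ys

# Illustrative primitives: AR(1) types, truthful reporting, y_t = m_t.
ar1 = lambda t, th, ys, rng: (0.5 * th[-1] if th else 0.0) + rng.random()
truthful = lambda t, th, ms, ys: th[-1]
identity_mech = lambda t, ms, ys, rng: ms[-1]

rng = random.Random(1)
thetas, msgs, ys = run_direct_mechanism(3, ar1, identity_mech, truthful, rng)
```

Under the truthful strategy the message history coincides with the type history, which is the equilibrium outcome the Revelation Principle refers to.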

A (pure) strategy for the agent in a direct mechanism is thus a collection of measurable functions $\sigma \equiv (\sigma_t)_{t=1}^{T}$, with $\sigma_t : \Theta^t \times \Theta^{t-1} \times Y^{t-1} \to \Theta_t$.

Definition 3 A strategy $\sigma$ is truthful if for all $t$ and all $((\theta^{t-1}, \theta_t), m^{t-1}, y^{t-1}) \in \Theta^t \times \Theta^{t-1} \times Y^{t-1}$,

$$\sigma_t((\theta^{t-1}, \theta_t), m^{t-1}, y^{t-1}) = \theta_t.$$

This definition defines a unique strategy, one that requires the agent to report his current type truthfully following all histories, including non-truthful ones.

In order to describe expected payoffs, it is convenient to develop some more notation. First we define histories. For all $t = 0, 1, \ldots, T$, let

$$H_t \equiv (\Theta^t \times \Theta^{t-1} \times Y^{t-1}) \cup (\Theta^t \times \Theta^t \times Y^{t-1}) \cup (\Theta^t \times \Theta^t \times Y^t),$$

where by convention $H_0 = \{\varnothing\}$ and $H_1 = \Theta_1 \cup (\Theta_1 \times \Theta_1) \cup (\Theta_1 \times \Theta_1 \times Y_1)$. Then $H_t$ is the set of all histories terminating within period $t$, and, accordingly, any $h \in H_t$ is referred to as a period-$t$ history. We let $H \equiv \bigcup_{t=0}^{T} H_t$ denote the set of all histories. A history $(\theta^s, m^t, y^u) \in H$ is a successor to history $(\hat{\theta}^j, \hat{m}^k, \hat{y}^l) \in H$ if (1) $(s, t, u) \geq (j, k, l)$, and (2) $(\theta^j, m^k, y^l) = (\hat{\theta}^j, \hat{m}^k, \hat{y}^l)$. A history $h = (\theta^s, m^t, y^u) \in H$ is a truthful history if $\theta^t = m^t$.

Fix a direct mechanism $\phi$, a strategy $\sigma$, and a history $h \in H$. Let $\mu[\phi, \sigma] \mid h$ denote the (unique) probability measure on $\Theta \times \Theta \times Y$ (the product space of types, messages, and decisions) induced by assuming that, following history $h$ in mechanism $\phi$, the agent follows strategy $\sigma$ in the future. More precisely, let $h = (\theta^s, m^t, y^u)$. Then $\mu[\phi, \sigma] \mid h$ assigns probability one to $(\tilde{\theta}, \tilde{m}, \tilde{y})$ such that $(\tilde{\theta}^s, \tilde{m}^t, \tilde{y}^u) = (\theta^s, m^t, y^u)$. Its behavior on $\Theta \times \Theta \times Y$ is otherwise induced by the stochastic process that starts in period $s$ with history $h$, and whose transitions are determined by the strategy $\sigma$, the mechanism $\phi$, and the kernels $F$. If $h$ is the null history we then simply write $\mu[\phi, \sigma]$. We also adopt the convention of omitting $\sigma$ from the arguments of $\mu$ when $\sigma$ is the truthful strategy. Thus $\mu[\phi]$ is the ex-ante measure induced by truthtelling, while $\mu[\phi] \mid h$ is the measure induced by the truthful strategy following history $h$.

Given this notation, we write the agent's expected payoff when, following history $h$, he plays according to strategy $\sigma$ in the future as $\mathbb{E}^{\mu[\phi,\sigma]|h}[U(\tilde{\theta}, \tilde{y})]$.[9]

For most of the results we use ex-ante rationality as our solution concept. That is, we require the agent's strategy to be optimal when evaluated at date zero, before learning $\theta_1$. In a direct mechanism this corresponds to ex-ante incentive compatibility, defined as follows.

Definition 4 A direct mechanism $\phi$ is ex-ante incentive compatible (ex-ante IC) if for all strategies $\sigma$,[10]

$$\mathbb{E}^{\mu[\phi]}[U(\tilde{\theta}, \tilde{y})] \geq \mathbb{E}^{\mu[\phi,\sigma]}[U(\tilde{\theta}, \tilde{y})].$$

This notion of IC is arguably the weakest for a dynamic environment. Thus deriving necessary conditions for this notion gives the strongest results. However, for certain results it is convenient to define IC at a given history.

Definition 5 Given a direct mechanism $\chi$, the agent's value function is the mapping $V : H \to \mathbb{R}$ such that, for all $h \in H$,
$$V(h) = \sup_{\sigma}\, E^{\mu[\chi,\sigma]|h}[U(\tilde\theta,\tilde y)].$$

Definition 6 Let $h \in H$. A direct mechanism $\chi$ is incentive compatible at history $h$ (IC at $h$) if
$$E^{\mu[\chi]|h}[U(\tilde\theta,\tilde y)] = V(h).$$

In words, $\chi$ is IC at $h$ if truthful reporting in the future maximizes the agent's expected continuation payoff following history $h$. This definition is flexible in that it allows us to generate different notions of IC as special cases by requiring IC at all histories in a particular subset. For example, ex-ante IC is equivalent to requiring IC only at the null history. Or, in a static model (i.e., if $T = 1$), the standard definition of interim incentive compatibility obtains by requiring $\chi$ to be IC at all histories where the agent knows only his type. In a dynamic model, a natural alternative is to require that if the agent has been truthful in the past, he finds it optimal to continue to report truthfully. This is obtained by requiring $\chi$ to be IC at all truthful histories.

The Principle of Optimality implies the following lemma.

Lemma 1 If $\chi$ is IC at $h$, then for $\mu[\chi]|h$-almost all successors $h'$ to $h$, $\chi$ is IC at $h'$.

9 Throughout we use "tildes" to denote random variables, with the same symbol without the tilde corresponding to a particular realization.
10 Restricting attention to pure strategies is without loss: by the Revelation Principle, the agent can be assumed to follow the truthful pure strategy in equilibrium. As for deviations, a mixed strategy (or a collection of behavioral strategies) induces a lottery over payoffs from pure strategies. Thus, if there is a profitable deviation to a mixed strategy, then there is also a profitable deviation to some pure strategy in the support of the mixed strategy.


In particular, if $\chi$ is ex-ante IC, then truthtelling is also sequentially optimal at truthful future histories $h$ with probability one, and the agent's equilibrium payoff at such histories is given by $V(h)$ with probability one. We will sometimes find it convenient to focus on such histories, and they are the only ones that matter for ex-ante expectations.

3.2 Necessary Conditions for IC: Recursive Approach

3.2.1 Backward-Induction Formula

We now set out to derive a recursive formula for (the derivative of) the agent's expected payoff in an incentive-compatible mechanism. This formula extends to dynamic models the standard use of the envelope theorem in static models to pin down the dependence of the agent's equilibrium utility on his true type (see, e.g., Milgrom and Segal, 2002). We begin with a heuristic derivation of the result. First recall the standard approach with $T = 1$, which expresses the derivative of the agent's equilibrium payoff in an IC mechanism with respect to his type as the partial derivative of his utility function with respect to the true type, holding the truthful equilibrium message fixed:
$$\frac{dV(\theta_1)}{d\theta_1} = \int_{Y_1} \frac{\partial U(\theta_1, y_1)}{\partial\theta_1}\, d\chi_1(y_1\,|\,\theta_1) = E^{\mu[\chi]|\theta_1}\!\left[\frac{\partial U(\tilde\theta_1, \tilde y_1)}{\partial\theta_1}\right].$$
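As a quick numerical illustration of this static envelope logic, consider a hypothetical posted-price mechanism (the price and the quasi-linear utility below are illustrative assumptions, not from the paper): the finite-difference derivative of the equilibrium payoff matches the partial derivative of utility at the truthful report.

```python
# Numerical sanity check of the static envelope formula
# dV/d(theta) = E[ dU/d(theta) ], message/allocation held fixed.
# Hypothetical example: posted price p, U(theta, x, t) = theta*x - t,
# allocation x = 1 iff theta >= p.

p = 0.6

def V(theta):
    # Equilibrium payoff of type theta under the posted-price mechanism.
    x = 1.0 if theta >= p else 0.0
    return theta * x - p * x

def dU_dtheta(theta):
    # Partial derivative of U w.r.t. the true type, report held fixed:
    # it equals the allocation assigned to the truthful report.
    return 1.0 if theta >= p else 0.0

for theta in [0.3, 0.7, 0.9]:
    h = 1e-6
    fd = (V(theta + h) - V(theta - h)) / (2 * h)  # finite difference
    assert abs(fd - dU_dtheta(theta)) < 1e-6
print("static envelope formula verified at sample types")
```

The check holds away from the kink at $\theta = p$, mirroring the "for a.e. $\theta$" qualifier in the formal results below.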

(For the moment we ignore the precise conditions for the argument to be valid.) With $T > 1$, we may be interested in evaluating the equilibrium payoff starting from any period $t$. In general, the agent's continuation utility from truthtelling following a truthful history $h = (\theta^t, \theta^{t-1}, y^{t-1})$ is
$$E^{\mu[\chi]|h}\!\left[U(\tilde\theta, \tilde y)\right] = \int U(\theta, y)\, dF_{T+1}(\theta_{T+1}\,|\,\theta^T, y^T)\, d\chi_T(y_T\,|\,m^T, y^{T-1}) \cdots dF_{t+1}(\theta_{t+1}\,|\,\theta^t, y^t)\, d\chi_t(y_t\,|\,m^t, y^{t-1}),$$
evaluated at $m = \theta$.

Assume for the moment that this expression is sufficiently well-behaved so that the derivatives encountered below exist. Suppose one now replicates the argument from the static case. That is, consider the agent's problem of choosing a continuation strategy given the truthful history $(\theta^t, \theta^{t-1}, y^{t-1})$. Assuming that an envelope argument applies, we differentiate with respect to the agent's current type $\theta_t$, holding the agent's truthful future messages fixed. The current type directly enters the payoff in two ways. First, it enters the agent's utility function $U$. This gives the term $E^{\mu[\chi]|h}[\partial U(\tilde\theta,\tilde y)/\partial\theta_t]$. Second, it enters the kernels $F$. This gives (after integrating


by parts and differentiating within the integral), for each $\tau > t$, the term
$$-\,E^{\mu[\chi]|h}\!\left[\int_{\Theta_\tau} \frac{\partial F_\tau(\theta_\tau\,|\,\tilde\theta^{\tau-1}, \tilde y^{\tau-1})}{\partial\theta_t}\; \frac{\partial V\big((\tilde\theta^{\tau-1},\theta_\tau),\, \tilde\theta^{\tau-1},\, \tilde y^{\tau-1}\big)}{\partial\theta_\tau}\, d\theta_\tau\right].$$

This suggests that a marginal change in the current type affects the equilibrium payoff through two different channels. First, it changes the agent's payoff from any allocation. Second, it changes the distribution of future types in all periods $\tau > t$, and hence leads to a change in the period-$\tau$ continuation utility, captured by the derivative of the value function $V$ evaluated at the appropriate history. While the above heuristic derivation isolates the effects of the current type on the agent's equilibrium payoff, it does not address the technical conditions for the derivation to be valid. In fact, in general the derivatives of the future value function cannot be assumed to exist, so the actual formal argument is more involved. In particular, we do not want to impose any restriction on the mechanism $\chi$ to guarantee differentiability of the value function. This would clearly be restrictive,

for example, for the purposes of deriving implications for optimal mechanisms. Instead, we seek to identify properties of the environment that guarantee that the value function is sufficiently well behaved. Our derivation makes use of the following key assumptions.

Assumption 1 For all $t$, $\Theta_t = (\underline\theta_t, \overline\theta_t) \subseteq \mathbb{R}$ for some $-\infty \leq \underline\theta_t < \overline\theta_t \leq +\infty$.

Assumption 2 For all $t$, and all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1}\times Y^{t-1}$, $\int |\theta_t|\, dF_t(\theta_t\,|\,\theta^{t-1}, y^{t-1}) < +\infty$.

Assumption 3 For all $t$, and all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1}\times Y^{t-1}$, the cdf $F_t(\cdot\,|\,\theta^{t-1}, y^{t-1})$ is strictly increasing on $\Theta_t$.

Assumption 4 For all $t$, and all $(\theta, y) \in \Theta\times Y$, $\partial U(\theta, y)/\partial\theta_t$ exists and is bounded uniformly in $(\theta, y)$.

Assumption 5 For all $t$, all $\tau < t$, and all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1}\times Y^{t-1}$, $\partial F_t(\theta_t\,|\,\theta^{t-1}, y^{t-1})/\partial\theta_\tau$ exists. Furthermore, for all $t$, there exists an integrable $B_t : \Theta_t \to \mathbb{R}\cup\{+\infty\}$ such that for all $\tau < t$, and all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1}\times Y^{t-1}$,
$$\left|\partial F_t(\theta_t\,|\,\theta^{t-1}, y^{t-1})/\partial\theta_\tau\right| \leq B_t(\theta_t).$$

Assumption 6 For all $t$, and all $y^{t-1} \in Y^{t-1}$, the probability measure $F_t(\cdot\,|\,\theta^{t-1}, y^{t-1})$ is continuous in $\theta^{t-1}$ in the total variation metric.$^{11}$

11 See, e.g., Stokey and Lucas (1989) for the definition of the total variation norm.

Assumptions 1 and 4 are familiar from static settings (see, e.g., Milgrom and Segal, 2002). Note, however, that we do not require that the set of types be bounded. Assumptions 2 and 3 are also typically made in static models. Assumption 2, about the existence of the expectation, is trivially satisfied if $\Theta_t$ is bounded. Assumption 3 is a full support assumption, which is related to Assumption 1. While Assumption 1 requires that the set $\Theta_t$ of all feasible types be connected, Assumption 3 implies that the set of relevant types is a connected set.$^{12}$ Assumption 5 requires that the distribution of the current type depend sufficiently smoothly on past types. The motivation for it is essentially the same as for requiring that, even in static settings, utility depends smoothly on types (i.e., Assumption 4). In a dynamic model the agent's expected payoff depends on his true type both through the utility function $U$ and through the kernels $F$. For the expected payoff to depend smoothly on types, both $U$ and $F$ need to have this property.$^{13}$ Since this assumption does not have an immediate counterpart in the static model, it is instructive to consider what restrictions it imposes on the stochastic process for $\theta_t$. In particular, it implies that the partial derivative of the expected current type with respect to any past type $\theta_\tau$, $\frac{\partial}{\partial\theta_\tau} E[\tilde\theta_t\,|\,\theta^{t-1}, y^{t-1}]$, exists and is bounded uniformly in $(\theta^{t-1}, y^{t-1})$ (see Lemma A1 in the Appendix).

It turns out that for non-Markov models Assumption 5 by itself does not impose enough regularity on the dependence of the kernels on past types, and hence we also impose Assumption 6. We are now ready to state our first main result.

Proposition 1 Suppose Assumptions 1-6 hold. (In the Markov case, Assumption 6 can be dispensed with.) If $\chi$ is IC at the truthful history $h^{t-1} = (\theta^{t-1}, \theta^{t-1}, y^{t-1})$, then $V(\theta_t, h^{t-1})$ is Lipschitz continuous in $\theta_t$, and for a.e. $\theta_t$,
$$\frac{\partial V(\theta_t, h^{t-1})}{\partial\theta_t} = E^{\mu[\chi]|(\theta_t, h^{t-1})}\!\left[\frac{\partial U(\tilde\theta, \tilde y)}{\partial\theta_t} - \sum_{\tau=t+1}^{T} \int_{\Theta_\tau} \frac{\partial F_\tau(\theta_\tau\,|\,\tilde\theta^{\tau-1}, \tilde y^{\tau-1})}{\partial\theta_t}\, \frac{\partial V\big((\tilde\theta^{\tau-1},\theta_\tau),\, \tilde\theta^{\tau-1},\, \tilde y^{\tau-1}\big)}{\partial\theta_\tau}\, d\theta_\tau\right]. \qquad \text{(IC-FOC)}$$
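The formula can be checked numerically in a minimal two-period sketch (all functional forms below are illustrative assumptions, not from the paper): with a normal AR(1)-style kernel $\theta_2 = a\theta_1 + \varepsilon$ and a period-2 posted price, the direct utility term vanishes and (IC-FOC) reduces to the kernel term.

```python
import math

# Two-period check of (IC-FOC), hypothetical example:
# theta2 = a*theta1 + eps, eps ~ N(0,1), no period-1 decision,
# posted price p in period 2, so U = (theta2 - p)*x with x = 1{theta2 >= p}.

a, p = 0.8, 0.5

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def V(theta1):
    # E[(theta2 - p)^+ | theta1] in closed form for the normal kernel.
    mu = a * theta1
    return (mu - p) * (1.0 - Phi(p - mu)) + phi(p - mu)

theta1 = 1.0
h = 1e-6
lhs = (V(theta1 + h) - V(theta1 - h)) / (2 * h)   # dV/dtheta1
# In (IC-FOC): dU/dtheta1 = 0, and the kernel term reduces to
# -int (dF2/dtheta1)*(dV2/dtheta2) dtheta2 = a * (1 - F2(p | theta1)).
rhs = a * (1.0 - Phi(p - a * theta1))
assert abs(lhs - rhs) < 1e-5
print("IC-FOC verified:", round(lhs, 6), "==", round(rhs, 6))
```

Here $\partial F_2/\partial\theta_1 = -a\,f_2$ and $\partial V_2/\partial\theta_2 = \mathbf{1}\{\theta_2 \geq p\}$, which is what makes the kernel term collapse to $a(1 - F_2(p\,|\,\theta_1))$.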

The recursive formula (IC-FOC) pins down how the agent's equilibrium utility varies as a function of the current type $\theta_t$. It is a dynamic generalization of the static envelope-theorem formula sometimes referred to as "Mirrlees's trick" (Mirrlees, 1971). (Of course, the static result obtains as a special case when $T = t = 1$.) As suggested in the heuristic derivation preceding the result, an infinitesimal change in the current type has two kinds of effects in a dynamic model. First, there is a direct effect on the final utility from decisions, which is captured by the partial derivative of $U$ with respect to $\theta_t$. This is the only effect present in static models. With more than one period, there is a second, indirect, effect through the impact of the current type on the distribution of future types. This is captured by the sum within the expectation. The effect of the current type $\theta_t$ on the distribution of the period-$\tau$ type is captured by the partial derivative of $F_\tau$ with respect to $\theta_t$. The induced change in utility is evaluated by considering the partial derivative of the period-$\tau$ value function $V$ with respect to $\theta_\tau$.

12 Depending on the notion of IC used, full support may not be needed as long as IC is imposed for all types in $\Theta_t$. However, without it, the interpretation becomes an issue. For example, consider a static model where $\Theta_1 = [0,1]$ but where $F$ assigns probability one to the set $\{0,1\}$. Is this a model with a continuous type space in which IC is imposed for all $\theta_1 \in [0,1]$, or a model with two types with IC imposed only on $\theta_1 \in \{0,1\}$?
13 This presumes the assumptions have to be stated separately for the primitives $U$ and $F$. A weaker joint (or "reduced form") assumption imposing restrictions directly on the expected payoff would suffice.

Remark 1 We have assumed that the information the agent receives in each period (his current type) is one-dimensional. If in a given period the agent's current type were multidimensional, we could still derive the same necessary condition (IC-FOC) for incentive compatibility by restricting the agent to observing each dimension of his current type one at a time and reporting each dimension before observing the subsequent ones. (This restriction only reduces the set of possible deviations and therefore preserves incentive compatibility.) However, incentive compatibility is harder to ensure when the agent observes several dimensions at once (see Remark 2 for more detail).

3.2.2 Role of the assumptions

To better appreciate the role of the assumptions in Proposition 1, it is useful to consider a few counterexamples. The first one illustrates the role of Assumptions 1 and 3. The other two illustrate the role of Assumption 5.

Example 1 (Lack of full support) Consider the following simple quasi-linear environment where $T = 2$, $\Theta_1 = (0,1)$, $\Theta_2 = (0,3)$, $Y_1 = \emptyset$, $y_2 = (x, p) \in Y_2 = \{0,1\}\times\mathbb{R}$, and
$$F_2(\theta_2\,|\,\theta_1) = \begin{cases} 0 & \text{if } \theta_2 < 0 \\ (1-\theta_1)\,\theta_2 & \text{if } \theta_2 \in [0,1) \\ 1-\theta_1 & \text{if } \theta_2 \in [1,2) \\ 1-\theta_1 + \theta_1(\theta_2 - 2) & \text{if } \theta_2 \in [2,3) \\ 1 & \text{if } \theta_2 \geq 3. \end{cases}$$
The agent's payoff is $U(\theta, y) = \theta_2 x - p$. This environment corresponds, for example, to a setting where the agent is a buyer whose period-1 type represents the probability he assigns to his period-2 valuation for an indivisible object (denoted by $\theta_2$) being higher than 2. Now consider the following deterministic mechanism
$$\chi(\theta_1, \theta_2) = \begin{cases} (1, p) & \text{if } \theta_2 \in [p, 3) \\ (0, 0) & \text{otherwise} \end{cases}$$
with $p \in [2,3)$.$^{14}$ That is, there is a posted price $p$ in period 2. It is easy to see that this mechanism is IC at any history. The value function, evaluated at a period-one history, is thus
$$V(\theta_1) = \big(E[\theta_2\,|\,\theta_2 \in [p,3)] - p\big)\Pr(\theta_2 \geq p\,|\,\theta_1) = \left(\tfrac{p+3}{2} - p\right)\theta_1(3-p) = \tfrac{\theta_1}{2}(3-p)^2.$$
The derivative of this function with respect to $\theta_1$ depends on $p$,

which is in contrast with what is predicted by (IC-FOC). The example also illustrates the failure of the revenue-equivalence result for quasi-linear settings documented in the static literature; we will come back to the relation between this result and Proposition 1 in Section 4.

Example 2 (Discontinuous transitions) Next, consider the same example discussed above, but now assume that $\Theta_1 = \Theta_2 = (0,1)$ and that
$$F_2(\theta_2\,|\,\theta_1) = \begin{cases} \theta_2 & \text{if } \theta_1 < 1/2 \\ \theta_2^2 & \text{if } \theta_1 \geq 1/2. \end{cases}$$
Now consider the following deterministic mechanism:
$$\chi(\theta_1, \theta_2) = \begin{cases} (1, p) & \text{if } \theta_1 \in [0.5, 1) \\ (0, 0) & \text{otherwise} \end{cases}$$
with $p \in (1/2, 2/3)$. That is, there is now a forward contract offered in period 1 at price $p$ for delivery in period 2. This mechanism is clearly IC at any history. The corresponding value function is
$$V(\theta_1) = \begin{cases} 0 & \text{if } \theta_1 < \tfrac12 \\ \tfrac23 - p & \text{if } \theta_1 \geq \tfrac12. \end{cases}$$

The value function is thus not Lipschitz continuous in this example and, once again, revenue equivalence fails to obtain.

Example 3 (Lack of equi-Lipschitz continuity) As another example of the role that Assumption 5 plays for the result in Proposition 1, consider an environment in which $Y_1 = (0, +\infty)$, $Y_2 = \emptyset$, $\Theta_1 = \Theta_2 = (0,1)$, and where, for any $y_1$, $F_2(\theta_2\,|\,\theta_1, y_1)$ is continuously differentiable in $\theta_1$ but is not equi-Lipschitz continuous in $\theta_1$. The agent's payoff is $U(\theta, y) = \theta_2$. Then consider the following mechanism:
$$\chi(\theta_1) = \arg\max_{y_1 \in Y_1} \int \theta_2\, dF_2(\theta_2\,|\,\theta_1, y_1).$$
By construction, the mechanism is IC at any history. Furthermore, by assumption, for any $y_1$, the function $g(\theta_1, y_1) \equiv \int \theta_2\, dF_2(\theta_2\,|\,\theta_1, y_1)$ is continuously differentiable in $\theta_1$. Following Example 1 in Milgrom and Segal (2002), one can then find transitions $F_2$ such that the derivative of $g(\theta_1, y_1)$ with respect to $\theta_1$ is not bounded by any integrable function, which makes the value function discontinuous in $\theta_1$.

14 In this example, we are abusing notation by letting $(x, p)$ denote the distribution that assigns measure one to $(x, p)$.

3.2.3 Closed-form expression for expected payoff derivative

The recursive formula for the partial derivative of $V$ with respect to the current type $\theta_t$ in Proposition 1 can be iterated backwards to get a closed-form formula. Although this can in principle be done under the assumptions of the proposition, a more compact expression obtains if we make the following additional assumption.

Assumption 7 For all $t$ and all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1}\times Y^{t-1}$, the function $F_t(\cdot\,|\,\theta^{t-1}, y^{t-1})$ is absolutely continuous and its density satisfies $f_t(\theta_t\,|\,\theta^{t-1}, y^{t-1}) > 0$ for a.e. $\theta_t \in \Theta_t$.

The existence of a strictly positive density allows us to write the formula in terms of expectation operators rather than integrals. Using iterated expectations then yields the following result.

Proposition 2 Suppose Assumptions 1-7 hold. (In the Markov case, Assumption 6 can be dispensed with.) If $\chi$ is IC at the truthful history $h^{t-1} \equiv (\theta^{t-1}, \theta^{t-1}, y^{t-1})$, then $V(\theta_t, h^{t-1})$ is Lipschitz continuous in $\theta_t$, and for a.e. $\theta_t$,
$$\frac{\partial V(\theta_t, h^{t-1})}{\partial\theta_t} = E^{\mu[\chi]|(\theta_t, h^{t-1})}\!\left[\sum_{\tau=t}^{T} J_t^\tau(\tilde\theta^\tau, \tilde y^{\tau-1})\, \frac{\partial U(\tilde\theta, \tilde y)}{\partial\theta_\tau}\right], \qquad (2)$$
where $J_t^t(\tilde\theta^t, \tilde y^{t-1}) \equiv 1$ and, for $\tau > t$,
$$J_t^\tau(\theta^\tau, y^{\tau-1}) \equiv \sum_{K\in\mathbb{N},\ l\in\mathbb{N}^K:\ t=l_0<\cdots<l_K=\tau}\ \prod_{k=1}^{K} I_{l_{k-1}}^{l_k}(\theta^{l_k}, y^{l_k-1}),$$
with
$$I_l^m(\theta^m, y^{m-1}) \equiv -\frac{\partial F_m(\theta_m\,|\,\theta^{m-1}, y^{m-1})/\partial\theta_l}{f_m(\theta_m\,|\,\theta^{m-1}, y^{m-1})}.$$

Proof. We proceed by backward induction. For $t = T$ the claim follows immediately from Proposition 1. Suppose now that it holds for all $\tau > t$ for some $t \in \{1, \ldots, T-1\}$. We will show that it holds for $t$. Using iterated expectations and the induction hypothesis, (IC-FOC) can be written as
$$\frac{\partial V(\theta_t, h^{t-1})}{\partial\theta_t} = E^{\mu[\chi]|(\theta_t, h^{t-1})}\!\left[\frac{\partial U(\tilde\theta, \tilde y)}{\partial\theta_t} + \sum_{\tau=t+1}^{T} I_t^\tau(\tilde\theta^\tau, \tilde y^{\tau-1})\, \frac{\partial V(\tilde\theta_\tau, \tilde h^{\tau-1})}{\partial\theta_\tau}\right]$$
$$= E^{\mu[\chi]|(\theta_t, h^{t-1})}\!\left[\frac{\partial U(\tilde\theta, \tilde y)}{\partial\theta_t} + \sum_{\tau=t+1}^{T} I_t^\tau(\tilde\theta^\tau, \tilde y^{\tau-1}) \sum_{s=\tau}^{T} J_\tau^s(\tilde\theta^s, \tilde y^{s-1})\, \frac{\partial U(\tilde\theta, \tilde y)}{\partial\theta_s}\right]$$
$$= E^{\mu[\chi]|(\theta_t, h^{t-1})}\!\left[\sum_{\tau=t}^{T} J_t^\tau(\tilde\theta^\tau, \tilde y^{\tau-1})\, \frac{\partial U(\tilde\theta, \tilde y)}{\partial\theta_\tau}\right],$$
where the last equality follows by the definition of $J_t^\tau$.

The intuition for (2) is as follows. $I_l^m$ can be interpreted as the "direct informational index" of signal $\theta_l$ about signal $\theta_m$. $J_t^\tau$ can be interpreted as the "total informational index" of $\theta_t$ about $\theta_\tau$. It incorporates all the ways in which $\theta_t$ can affect $\theta_\tau$, both directly and through the intermediate signals observed by the agent. Note that in calculating $J_t^\tau$ each possible chain of effects must be counted exactly once. For example, in the Markov case, $I_l^m = 0$ for $l < m-1$, and hence $J_t^\tau(\theta^\tau, y^{\tau-1}) = \prod_{k=t+1}^{\tau} I_{k-1}^{k}(\tilde\theta^k, \tilde y^{k-1})$. More generally, the following example suggests that the total informational indices can be viewed as "impulse responses" of the stochastic process for $\theta$ to an infinitesimal change in $\theta_t$.

t.

evolve according to an AR(k) process:

t

=

k X

+ "t ,

j t j

j=1

where

t

= 0 for any t

0;

j

2 R for any j = 1; :::; k, and "t is the realization of the random

variable ~"t distributed according to some cdf Gt with strictly positive density over R, independent from all ~"s , s 6= t. For convenience, hereafter we let F

so that for any

1

j

;y

1

0 for all j > k. Then

j

0

k X

=G @

1

jA ;

j

j=1

> t, It

;y

1

@F

j

f ( j

18

1

;y 1

;y

1

[email protected] 1)

t

=

t;

and Jt

1

;y

M Y

X

=

lm l m

1

:

M 2N, l2NM :t=l0 <:::

;y

Thus in this case the total informational index Jt

is simply the “impulse response func-

tion” for the AR(k) process. Note also that here the total informational index is only a function of t and

but not of ( ; y). In the special case of an AR(1) process we have

It

which implies that Jt

3.3

;y

1

=(

;y

1

1)

t

=

(

1

0

if

=t+1

otherwise,

.

Necessary Conditions for IC: Independent Shocks

In this section, we illustrate an alternative approach to the characterization of the agent’s payo¤ in a dynamic mechanism. This approach is based on the idea that any stochastic process admits an equivalent representation in which the information the agent receives over time can be described as a function of “shocks” that are serially independent (see also Eso and Szentes, 2007, for a similar approach in a two-period-one-decision model). This approach complements the one illustrated in the previous section in that it leads to a di¤erent set of assumptions on the primitive environment that guarantee that the agent’s payo¤ in any incentive-compatible mechanism can be pinned down by an envelope condition. We start by de…ning what we mean when we say that a process admits an independent-shock representation. Next, we de…ne in what sense this representation is “equivalent”to the original one and hence can be used as an alternative approach to the characterization of incentive-compatible mechanisms. We then proceed by showing how the formula for the (derivative of the) agent’s payo¤ identi…ed in the previous section simpli…es when the agent is asked to report the shocks instead of his types. Finally, we conclude by showing that any stochastic process admits a particular independentshock representation. We then use this canonical representation to identify conditions for the primitive environment that guarantee that in the corresponding independent-shock representation the agent’s reduced-form payo¤ satis…es the analog of the envelope formula derived in the previous section. While these conditions di¤er from the ones identi…ed above they lead to the same payo¤ formula when the latter is expressed in terms of the primitive representation. De…nition 7 Let ~" tribution G, and let z

(~"t )Tt=1 denote a random vector with support E zt : E t

representation for F = Ft :

t 1

Yt

Yt

1

1

!

!

T t t=1 .

(

t)

19

T E t=1 t

RT and dis-

We say that (G; z) is an independent-shock

T t=1

if

QT

(i) for each t, there exists a probability measure Gt on Et such that, for any " 2 E, G(") =

t=1 Gt ("t );

and

(ii) for any t and any "t

the distribution of

t

given

1

2 Et

yt 1

and

1,

the distribution of zt (~"t ; y t

t 1

=

z t 1 ("t 1 ; y t 2 )

(z1 ("1 ); z2

Together, conditions (i) and (ii) simply say that, for any y t primitive information

t

given ~"t

1)

1,

1

("2 ; y

= "t

1

is the same as

1 ); :::; zt 1 ("

t 1 ; y t 2 )).

one can think of the agent’s

as generated by independent “shocks”~"t .15

Example 5 Consider the AR(k) process described in 4. In this example, the functions zt do not depend on y. They are given by z1 ("1 ) = "1 z2 ("2 ) = z3 ("3 ) =

1 ( 1 "1

zt ("t ) =

Suppose now that

Pt

j=1

+ "2 ) +

2 "1

1 "1

+ "2 2 1

+ "3 = (

+

2 )"1

+

1 "2

:::

2

M Y

X

4

lm lm

1

M 2N, l2NM :j=l0 <:::

+ "3

3

5 "j :

is generated by independent shocks ". Assume further that the agent

observes not only , but also the shocks ". Let his payo¤ (de…ned over " and y) be described by the function ^ ("; y) U

U (z("; y); y)

(3)

= U (z1 ("1 ); z2 ("2 ; y1 ); :::zt ("t ; y t

1

); :::; zT ("T ; y T

1

); y T ):

Next, consider a (randomized direct) mechanism ^

D

^t : Et

Yt

1

!

ET (Yt )

t=1

;

in which the agent reports the shocks " instead of his primitive payo¤-relevant information . The ^ ; G; Z) in the following sense. primitive representation (U; F ) is equivalent to the representation (U ^ t ( jz t (~"t ; y t 1 )) denote the regular conditional probability distribFor any y t 1 2 Y t 1 , let G

ution of the vector ~"t given the sigma-algebra

(z t (~"t ; y t

1 ))

generated by the random variable

z t (~"t ; y t 1 ).16 15

A more general de…nition of an independent-shock representation allows the shocks ~" to depend on the decisions y. However, as will become clear below, for any such representation there is an equivalent one (in the sense of Lemma 2) where the shocks do not depend on y. 16 Such a regular conditional probability distribution here exists since "t 2 Rt . See, e.g., Dudley (2002).


Lemma 2 (a) Given any ex-ante IC mechanism $\chi$ for the primitive representation $(U, F)$, there exists an ex-ante IC mechanism $\hat\chi$ for the corresponding independent-shock representation $(\hat U, G, z)$ such that, for any $t$, any measurable $A \subseteq Y_t$, and any $(\theta^t, y^{t-1})$,
$$\int \hat\chi_t(A\,|\,\varepsilon^t, y^{t-1})\, d\hat G_t\big(\varepsilon^t\,\big|\,z^t(\varepsilon^t, y^{t-1}) = \theta^t\big) = \chi_t(A\,|\,\theta^t, y^{t-1}). \qquad (4)$$

Yt , and any ( ; y t

1 ),

(4) holds.

Hence any outcome (i.e., any joint distribution over agent report

Y ) that can be sustained by having the

can also be sustained by having him report the shocks ", and vice versa. Note that

Part (a) follows directly from the fact that if the mechanism ^ de…ned by ^ ( j"t ; y t

1

)=

t(

jz t ("t ; y t

1

); y t

1

is ex-ante IC, then the mechanism

) 8("t ; y t

1

)

is also ex-ante IC. This mechanism de facto uses the same information as

(5) , in the sense that it

depends on " only through z("; y). Part (b) is also trivially satis…ed. It su¢ ces to construct from ^ using the transformation de…ned in (4). To see that if ^ is ex-ante IC, so is , it su¢ ces to note that (i) payo¤s depend on the shocks " only thought z("; y), (ii) induces the same distribution over Y as ^ , and (iii) any distribution over Y that the agent can induce given could ^ also have been induced given . Suppose now that an independent-shock representation exists. (We will show below that this is always the case.) One can then use this representation as an alternative route to the characterization of the properties of dynamic incentive-compatible mechanisms. In particular, one can treat the shocks as the agent’s private information and then use the formula in Proposition 1 to pin down the (derivative of the) value function. To this aim, let ^ H

("s ; mt ; y u ) 2 E s

Et

Yu

with s

t

u

s

1

^ 2 H, ^ let denote the set of all possible histories in the extensive form corresponding to ^ . For any h ^ denote the (unique) probability measure over E E Y induced by assuming that following ^ [ ^ ]jh

^ in the mechanism ^ , the agent reports truthfully at any subsequent information set. Fihistory h ^ ^ ^ Now assume each Et denote the agent’s value function in ^ evaluated at history h. nally, let V^ (h) R ^ is equi-Lipschitz continuous and difis an interval, with j"t jdGt < +1, and that the function U ^ t 1 = ("t 1 ; "t 1 ; y t 1 ), ferentiable in each "t . We then have that, if ^ is IC at the truthful history h


then $\hat V(\varepsilon_t, \hat h^{t-1})$ is Lipschitz continuous in $\varepsilon_t$, and for a.e. $\varepsilon_t$,
$$\frac{\partial \hat V(\varepsilon_t, \hat h^{t-1})}{\partial\varepsilon_t} = E^{\hat\mu[\hat\chi]|(\varepsilon_t, \hat h^{t-1})}\!\left[\frac{\partial \hat U(\tilde\varepsilon, \tilde y)}{\partial\varepsilon_t}\right]. \qquad (6)$$

While this formula can be read as a special case of (IC-FOC), the proof of this result is significantly simpler and follows essentially from the same arguments as in a static setting (see, e.g., Milgrom and Segal, 2002). Condition (6) thus provides an alternative representation of how the agent's payoff must vary with the agent's private information in a dynamic IC mechanism. In certain applications, working with such a representation may facilitate the characterization of the properties of optimal mechanisms.

At this point one may wonder which $F$ admit an independent-shock representation, and which environments $(U, F)$ admit an independent-shock representation for which (6) holds (i.e., for which the corresponding reduced-form payoff $\hat U(\varepsilon, y)$ is equi-Lipschitz continuous and differentiable in each $\varepsilon_t$). We address each of these questions in turn. First, we show that any $F$ admits a particular independent-shock representation, which henceforth we refer to as the canonical representation. This representation is derived from $F$ as follows. Let $\tilde\varepsilon$ denote a vector of independent random variables, each uniformly distributed over $(0,1)$. Next, for any $t$ and any $\varepsilon \in (0,1)$, let
$$F_t^{-1}(\varepsilon\,|\,\theta^{t-1}, y^{t-1}) \equiv \inf\big\{\theta_t : F_t(\theta_t\,|\,\theta^{t-1}, y^{t-1}) \geq \varepsilon\big\}$$

denote the generalized inverse of the kernel $F_t$. Then let
$$z_t(\varepsilon^t, y^{t-1}) \equiv F_t^{-1}\big(\varepsilon_t\,\big|\,F_1^{-1}(\varepsilon_1),\ F_2^{-1}(\varepsilon_2\,|\,F_1^{-1}(\varepsilon_1), y_1),\ \ldots,\ y^{t-1}\big). \qquad (7)$$

Applying the quantile function theorem recursively (see, e.g., Angus, 1994), one can then show that, given any $y^{t-1}$ and any $\varepsilon^{t-1} \in E^{t-1} \equiv (0,1)^{t-1}$, the distribution of $z_t(\tilde\varepsilon^t, y^{t-1})$ given $\tilde\varepsilon^{t-1} = \varepsilon^{t-1}$ is the same as the distribution of $\tilde\theta_t$ given $y^{t-1}$ and $\theta^{t-1} = \big(F_1^{-1}(\varepsilon_1),\ F_2^{-1}(\varepsilon_2\,|\,F_1^{-1}(\varepsilon_1), y_1),\ \ldots\big)$.

is

1 ).

Hence, any process admits an independent-shock representation in which, for any t, Gt is simply the uniform distribution over (0; 1) and where the functions zt : E t

Yt

1

!

t

are as in (7).

Using the canonical representation, we can identify conditions on the primitive environment ^ in the canonical representation is equi(U; F ) that guarantee that the corresponding payo¤ U Lipschitz continuous and di¤erentiable in zt , for any t. Assumption 8 For all y 2 Y , U ( ; y) is equi-Lipschitz and continuously di¤ erentiable. 22

Assumption 9 For all $t$, all $\varepsilon \in (0,1)$, and all $y^{t-1} \in Y^{t-1}$, $F_t^{-1}(\varepsilon\,|\,\cdot\,, y^{t-1})$ is equi-Lipschitz and continuously differentiable.

Assumption 10 For all $(\theta^{t-1}, y^{t-1}) \in \Theta^{t-1}\times Y^{t-1}$, $F_t^{-1}(\cdot\,|\,\theta^{t-1}, y^{t-1})$ is equi-Lipschitz and continuously differentiable.

We then have the following result.

Proposition 3 Suppose the primitive description of the environment $(U, F)$ satisfies Assumptions 1, 2, 8, 9 and 10. Then, in the corresponding canonical representation, the function $\hat U$ obtained from $(U, F)$ using the transformation in (3) is equi-Lipschitz continuous and differentiable in $\varepsilon$. It follows that a mechanism $\hat\chi$ is IC at the truthful history $\hat h^{t-1} = (\varepsilon^{t-1}, \varepsilon^{t-1}, y^{t-1})$ only if the value function $\hat V(\varepsilon_t, \hat h^{t-1})$ satisfies (6).

Proof. That, under Assumptions 8, 9 and 10, $\partial\hat U(\varepsilon, y)/\partial\varepsilon_t$ exists and is continuous in $\varepsilon_t$ follows from standard results in calculus (see, e.g., Rudin, 1976, Theorem 9.18). That $\partial\hat U(\varepsilon, y)/\partial\varepsilon_t$ is bounded uniformly over $(\varepsilon, y)$ follows from the same conditions along with Assumption 2. It then follows that the function $\hat U$ is equi-Lipschitz continuous and differentiable in $\varepsilon$. The result follows directly from this property together with the fact that $E_t = (0,1)$ is both connected and bounded.

Proposition 3 thus identifies a new set of conditions on the primitive environment $(U, F)$ that guarantee that in any IC mechanism the agent's expected payoff, when expressed using the canonical representation, satisfies the envelope formula (6). Comparing the conditions in this proposition with those in Proposition 1, one can see that while the assumptions in Proposition 1 rule out, for example, an atom at $\theta_t = \theta_t^{\#}$ that "shifts" with the past $\theta^{t-1}$ (e.g., fully persistent types), such a possibility is accommodated by the assumptions in Proposition 3. On the other hand, the assumptions in Proposition 3 rule out an atom at $\theta_t = \theta_t^{\#}$ whose measure grows with $\theta^{t-1}$, while such a possibility is allowed by the assumptions in Proposition 1. The assumptions in the two propositions are thus not nested and possibly capture different environments.

such a possibility is allowed by the assumptions in Proposition 1. The assumptions in the two propositions are thus not nested and possibly capture di¤erent environments. To see how the formula in (6) compares to the closed-form one in (2), we proceed as follows. Take any mechanism for the primitive representation (U; F ) and let ^ be the mechanism in the corresponding independent-shock representation that is obtained from using (5). Because, for any y, the agent’s payo¤ in ^ depends on " only through = z("; y), we have that, for any y t 1 and any "t the following identity holds: ^ V^ ("t ; "t

1

; yt

1

) = V (z t ("t ; y t

23

1

); z t

1

("t

1

; yt

2

); y t

1

):

(8)

Therefore, at any point of differentiability of $\hat V$ in $\varepsilon_t$,
$$\frac{\partial\hat V(\varepsilon_t, \varepsilon^{t-1}, y^{t-1})}{\partial\varepsilon_t} = \frac{\partial V\big(z^t(\varepsilon^t, y^{t-1}), z^{t-1}(\varepsilon^{t-1}, y^{t-2}), y^{t-1}\big)}{\partial\theta_t}\; \frac{\partial z_t(\varepsilon^t, y^{t-1})}{\partial\varepsilon_t}. \qquad (9)$$
Using (9) and (3), the formula in (6) can then be rewritten as
$$\frac{\partial V\big(z^t(\varepsilon^t, y^{t-1}), z^{t-1}(\varepsilon^{t-1}, y^{t-2}), y^{t-1}\big)}{\partial\theta_t}\; \frac{\partial z_t(\varepsilon^t, y^{t-1})}{\partial\varepsilon_t} = E^{\hat\mu[\hat\chi]|(\varepsilon_t, \hat h^{t-1})}\!\left[\sum_{s=t}^{T} \frac{\partial U\big(z^T(\tilde\varepsilon^T, \tilde y^{T-1}), \tilde y^T\big)}{\partial\theta_s}\; \frac{\partial z_s(\tilde\varepsilon^s, \tilde y^{s-1})}{\partial\varepsilon_t}\right]. \qquad (10)$$
Note that this formula applies to all independent-shock representations of $F$, not only to the canonical one.

Note that this formula applies to all independent-shock representations of F , not only to the canonical one. In the special case in which (G; z) is the canonical representation we have @zt ("t ; y t @"t

1)

=

@Ft 1 ("t j F1 1 ("1 ); F2 1 ("2 j F1 1 ("1 ); y1 ); :::; y t @"t

1)

;

whereas for any s > t, @zs ("s ; y s @"t

1)

=

s 1 X

Asj ("s ; y s

1

j=t

=

@zt ("j ; y j @"t

1)

with Asj ("s ; y s

1

s,

1

1

1)

(11)

) + Ast+1 ("s ; y s

1

t+1 t )At+1 ;y ) t ("

t+2 t+1 t+1 t t+2 t+1 )(At+2 ; y ) + At+1 ; y )At+2 ; y )) + :::]; t (" t (" t+1 ("

@Fs 1 ("s j F1 1 ("1 ); F2 1 ("2 j F1 1 ("1 ); y1 ); :::; y s @ j

)

One can then show that if Ft ( t j and di¤erentiable in each

@zj ("j ; y j @"t

[Ast ("s ; y s

+ Ast+2 ("s ; y s

s

t 1

; yt

1)

1)

:

is strictly increasing and absolutely continuous in

t

t, then

Asj ("s ; y s and

)

1

) = Ijs ( s j

@zs ("s ;y s @"t @zt ("t ;y t @"t

s 1

; ys

1

)

s =z s ("s ;y s 1 )

;

1) 1)

= Jts (z s ("s ; y s

1

); y s

1

);

so that (6) coincides with (2)— the details are in the Appendix. We then have the following result. Proposition 4 Suppose the primitive environment (U; F ) satis…es assumptions 1, 2, 3, 7, 8, 9, 24

and 10. Then the conclusions of Proposition 2 hold.

Note that Assumptions 1, 2, 3 and 7 are also present in Proposition 2. Assumption 8 is stronger than Assumption 4 (it implies the latter). On the other hand, Assumptions 5 and 6 are not implied by Assumptions 9 and 10. The two propositions thus identify different sets of conditions on the primitives under which the closed-form formula in (2) is a valid necessary condition for IC.

3.4 Sufficient conditions for IC

While formula (2) summarizes local (first-order) incentive constraints, it does not imply the satisfaction of all (global) incentive constraints. In this section we formulate some sufficient conditions for incentive compatibility. These conditions generalize the well-known monotonicity condition, which together with the first-order condition characterizes incentive-compatible mechanisms in the static model with a one-dimensional type space. The static characterization cannot be extended to the dynamic model, which can be viewed as an instance of a multidimensional mechanism design problem, for which the characterization of IC mechanisms is more difficult (see, e.g., Rochet and Stole, 2003). More precisely, there are two sources of difficulty in ensuring incentive compatibility of a dynamic mechanism: (a) in general one needs to consider multiperiod deviations, since once the agent lies in one period, his optimal continuation strategy may require lying in subsequent periods as well;¹⁷ and (b) even if one focuses on single-period deviations, in which the agent misrepresents his current one-dimensional type, the decisions assigned by the mechanism from that period onward form a multidimensional decision space.

While these problems make it hard to obtain a general characterization of incentive compatibility, we can still formulate sufficient conditions for IC that prove useful in a number of applications. Problem (a) is sidestepped by focusing on environments in which we can ensure that truthtelling is an optimal continuation strategy even following deviations, so that incentive compatibility can be verified by checking one-period deviations. (While this focus is quite restrictive, it includes all Markov environments, as well as some other interesting cases; see for example the application to sequential auctions with AR(k) values considered in subsection 5.2.) Problem (b) is sidestepped by formulating a monotonicity condition that, while not necessary for IC, is sufficient and is easy to check in applications.

¹⁷ It is possible to ensure that truthtelling is optimal even after deviations by allowing the agent to re-report his complete history $\theta^t$ in each period $t$, possibly contradicting his earlier reports. This is the version of the revelation principle proposed by Doepke and Townsend (2006). While this approach would allow us to restrict attention to one-stage deviations from truthtelling, the deviations in each period would now be multidimensional, and contingent on possibly inconsistent reporting histories, so it is not clear that this approach would simplify the formulation of sufficient conditions.

Proposition 5 Suppose the environment satisfies either the assumptions of Proposition 2 or those of Proposition 4. Fix any period $t$ and, for any period-$t$ history $h$, let

$$D(h)\equiv\mathbb{E}^{\lambda[\chi]\mid h}\left[\sum_{\tau=t}^{T}J_t^\tau(\tilde\theta^\tau;\tilde y^{\tau-1})\,\frac{\partial U(\tilde\theta,\tilde y)}{\partial\theta_\tau}\right].$$

Suppose that for any truthful history $(\theta^{t-1},\theta^{t-1},y^{t-1})$:

(i) $\mathbb{E}^{\lambda[\chi]\mid((\theta^{t-1},\theta_t),(\theta^{t-1},\theta_t),y^{t-1})}[U(\tilde\theta,\tilde y)]$ is Lipschitz continuous in $\theta_t$, and for a.e. $\theta_t$,
$$\frac{d}{d\theta_t}\,\mathbb{E}^{\lambda[\chi]\mid((\theta^{t-1},\theta_t),(\theta^{t-1},\theta_t),y^{t-1})}[U(\tilde\theta,\tilde y)]=D\bigl((\theta^{t-1},\theta_t),(\theta^{t-1},\theta_t),y^{t-1}\bigr);$$

(ii) for any $m_t$ and a.e. $\theta_t$,
$$\Bigl[D\bigl((\theta^{t-1},\theta_t),(\theta^{t-1},\theta_t),y^{t-1}\bigr)-D\bigl((\theta^{t-1},\theta_t),(\theta^{t-1},m_t),y^{t-1}\bigr)\Bigr](\theta_t-m_t)\ge 0;$$

(iii) $\chi$ is IC at any (possibly non-truthful) period $t+1$ history.

Then $\chi$ is IC at any truthful period-$t$ history.

Propositions 2 and 4 imply that condition (i) in Proposition 5 is a necessary condition for the mechanism to be IC at all truthful period-$t$ histories (recall that this means that the agent's value function at these histories coincides with his expected equilibrium payoff). The addition of conditions (ii) and (iii) is then sufficient (but in general not necessary) for IC at all truthful period-$t$ histories; the proof is based on a lemma in the appendix that extends to a dynamic setting a result by Garcia (2005) for static mechanism design with a one-dimensional type and multidimensional decisions. The assumption that the mechanism is IC at all period $t+1$ histories, including non-truthful ones, is rather strong, but it can be satisfied in some applications. As one prominent example, in a Markov setting, the history $\theta^t$ of the agent's true types does not affect his incentives in period $t+1$ once $\theta_{t+1}$ is observed. Thus, any mechanism that is IC at all truthful period $t+1$ histories must also be IC at all period $t+1$ histories. In this case, the Proposition can be iterated starting from period $T+1$ and moving backward to establish IC in all periods and at all histories.
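Condition (ii) in Proposition 5 is a single-crossing (monotonicity) requirement on $D$. A minimal static sketch (one agent, one period; the payoff $u(\theta,x)=\theta x$ and the monotone allocation rule $x(m)=m^2$ are hypothetical, chosen only for illustration) shows how the condition can be checked on a grid:

```python
# D(theta, m): derivative of the agent's payoff in his true type theta when
# reporting m, i.e. d/dtheta [theta * x(m)] = x(m) for the illustrative payoff.
def D(theta, m):
    return m ** 2  # x(m) = m**2, an increasing allocation rule (assumed)

grid = [i / 10 for i in range(11)]
# Condition (ii): [D(theta, theta) - D(theta, m)] * (theta - m) >= 0 for all m.
ok = all((D(th, th) - D(th, m)) * (th - m) >= 0 for th in grid for m in grid)
print(ok)
```

Here the check passes because $(\theta^2-m^2)(\theta-m)\ge 0$ whenever $x(\cdot)$ is increasing; with a non-monotone allocation rule the same loop would flag a violation.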

4 Multiagent quasilinear case

We now introduce multiple agents. The multiagent model we consider features three important assumptions: (1) the environment is quasilinear (i.e., the decision taken in each period can be decomposed into an allocation and a vector of monetary payments, and the agents' preferences are quasilinear in the payments), (2) the type distributions are independent of past monetary payments

(but they may still depend on past allocations), and (3) types are independent across agents. After setting up the model we show how from the perspective of an individual agent the model reduces to the single-agent case studied in the previous section.

4.1 Quasilinear environment

There are $N$ agents indexed by $i=1,\ldots,N$. In each period $t=1,\ldots,T$, each agent $i$ is shown a nonmonetary "allocation" decision $x_{it}\in X_{it}$ (where $X_{it}$ is a measurable space) and given a payment $p_{it}\in\mathbb{R}$. The set of feasible joint allocation decisions in period $t$ is $X_t\subseteq\prod_{i=1}^N X_{it}$.¹⁸ ¹⁹ Each agent $i$ observes his own allocations $x_{it}$ but not the others' allocations $x_{-i,t}$. The observability of $x_{it}$ should be thought of as a technological restriction: a mechanism can reveal more information to agent $i$ in period $t$ than $x_{it}$, but it cannot conceal $x_{it}$. As for the payments, because none of the results hinges on the specific information the agents receive about $p$, we leave the description of the information the agents receive about $p$ unspecified. As in the single-agent case, histories are denoted using the superscript notation. For example, $(x^t,p^t)$ is an element of $X^t\times\mathbb{R}^{Nt}$, where $X^t\equiv\prod_{\tau=1}^t X_\tau$ and $X\equiv\prod_{\tau=1}^T X_\tau$.

In each period $t$, each agent $i$ privately observes his current type $\theta_{it}\in\Theta_{it}\subseteq\mathbb{R}$. The current type profile is then denoted by $\theta_t\equiv(\theta_{1t},\ldots,\theta_{Nt})\in\Theta_t\equiv\prod_i\Theta_{it}$. The distribution of the type profile $\theta\in\Theta\equiv\prod_{t=1}^T\Theta_t$ is described in the following definition. We omit superscripts for full histories, with the exception of $x_i^T$ and $p_i^T$: $x_t\equiv(x_{1t},\ldots,x_{Nt})$, $\theta_i\equiv(\theta_{i1},\ldots,\theta_{iT})$, $x_i^T\equiv(x_{i1},\ldots,x_{iT})$, $p_i^T\equiv(p_{i1},\ldots,p_{iT})$ (and the sets they are elements of). This is to avoid confusion between, e.g., $x_t\equiv(x_{1t},\ldots,x_{Nt})$ and $x_i^T\equiv(x_{i1},\ldots,x_{iT})$.

Agent $i$'s payoff function is denoted $U_i:\Theta\times X\times\mathbb{R}^T\to\mathbb{R}$. We then define a quasi-linear environment as follows.

Definition 8 The environment is quasilinear if the following hold:

1. There is a sequence of decisions $(x,p)\in X\times\mathbb{R}^{NT}$, where $x=(x_1^T,\ldots,x_N^T)$ is an allocation, $p$ is a vector of payments, and for all $i$, $x_i^T$ is the minimal information about $x$ received by agent $i$.

2. The distribution of the type profile $\theta$ is governed by kernels $F_t:\Theta^{t-1}\times X^{t-1}\to\Delta(\Theta_t)$, $t=1,\ldots,T$.

3. For all $i$, the payoff function of agent $i$, $U_i:\Theta\times X\times\mathbb{R}^T\to\mathbb{R}$, takes the quasilinear form
$$U_i(\theta,x,p_i^T)=u_i(\theta,x)-\sum_{t=1}^T p_{it}$$
for some measurable $u_i:\Theta\times X\to\mathbb{R}$.

Note that part 2 restricts the distribution of $\theta$ to be independent of the payments. As for part 3, note that for the sake of generality we allow agent $i$'s utility to depend on things he does not observe, namely $x_{-i}^T$ and $\theta_{-i}^T$.²⁰

Definition 9 Types are Independent if for all $t$ and all $(\theta^{t-1},x^{t-1})\in\Theta^{t-1}\times X^{t-1}$,
$$F_t(\,\cdot\mid\theta^{t-1},x^{t-1})=\prod_{i=1}^N F_{it}(\,\cdot\mid\theta_i^{t-1},x_i^{t-1}),$$
where for all $i$, $F_{it}(\,\cdot\mid\theta_i^{t-1},x_i^{t-1})$ is a probability measure on $\Theta_{it}$.

¹⁸ For example, we can have $X_t=\{x_t\in\mathbb{R}_+^N:\sum_i x_{it}\le\bar x_t\}$ when the decision is the allocation of a private good among agents, or $X_t=\{x_t\in\mathbb{R}_+^N:x_{1t}=x_{2t}=\cdots=x_{Nt}\}$ when the decision is the provision of a public good.
¹⁹ This formulation does not explicitly allow for decisions that are not observed by any agent at the time they are made, but such decisions could still be modeled if need be by creating a fictitious agent observing them.

This definition is the proper extension of the Independent-Types assumption to dynamic settings, allowing us to extend such static results as revenue equivalence and the virtual-surplus representation of expected profits. Note that the definition can be broken up into three parts: (i) conditional on any history $(\theta^{t-1},x^{t-1})$, period-$t$ types are independent across agents; (ii) the distribution of agent $i$'s period-$t$ type, $F_{it}(\,\cdot\mid\theta_i^{t-1},x_i^{t-1})$, does not depend on the other agents' past types (except possibly indirectly through the decision history $x_i^{t-1}$ observed by agent $i$); (iii) the distribution $F_{it}(\,\cdot\mid\theta_i^{t-1},x_i^{t-1})$ also does not depend on the history of decisions $x_{-i}^{t-1}$ that the agent has not observed. It is easy to see that if assumptions (i) or (ii) are not satisfied, then a mechanism similar to the one proposed by Cremer and McLean (1988) could be used to extract the agents' information rents. It turns out that a similar extraction of rents is possible if assumption (iii) is not satisfied, by using a randomized mechanism; see the discussion after Proposition 6 below.

4.2 Multiagent mechanisms

For most of the analysis we will focus on the Bayesian Nash Equilibria (BNE) of mechanisms designed for the environment described above. As discussed for the single-agent case, this solution concept imposes the weakest form of rationality on the agents' behavior and thus yields the strongest necessary conditions for incentive compatibility. The sufficient conditions we offer will, however, ensure implementation with a stronger solution concept such as (weak) Perfect Bayesian Equilibrium.

By the revelation principle (adapted from Myerson, 1986), it is without loss of generality to restrict attention to Bayesian incentive compatible "direct mechanisms" (defined more precisely below) where (1) in each period each agent confidentially reports his current type $\theta_{it}$ to the mechanism, and (2) the mechanism reports no information back to the agents (i.e., each agent $i$ observes only $(\theta_i^T,x_i^T)$ plus whatever is assumed observable about the payments).²¹ The proof for (1) is the familiar one. As for (2), suppose there exists an incentive-compatible direct mechanism in which more information is revealed to the agents than what is described in (2). Concealing this additional information would then amount to pooling different incentive-compatibility constraints, resulting in a new IC mechanism that implements the same outcomes (i.e., the same distribution over $\Theta\times X\times\mathbb{R}^{NT}$).

When exploring the implications of incentive compatibility, it is also convenient to assume that all payments take place at the very end. This is actually without loss of generality. In fact, because postponing payments amounts to hiding information, for any IC mechanism in which some payments are made (and possibly observed) in each period, there exists another IC mechanism in which all payments are postponed to the end, which induces the same distribution over $\Theta\times X$ and, for all $\theta$, induces the same total payments.

For notational simplicity, hereafter we restrict attention to deterministic mechanisms. This entails no loss since randomizations could always be generated by introducing a fictitious agent whose type is publicly observed. We will also formulate sufficient conditions under which such randomizations will not be useful.

Definition 10 A deterministic direct mechanism is a pair $\langle\chi,\psi\rangle$, where $\chi=(\chi_t:\Theta^t\to X_t)_{t=1}^T$ is an allocation rule, and $\psi:\Theta\to\mathbb{R}^N$ is a (total) payment scheme.

A deterministic direct mechanism $\langle\chi,\psi\rangle$ defines the following sequence in each period $t$, following a history $\theta^{t-1}$ of type observations and a history $m^{t-1}=(m_1^{t-1},\ldots,m_N^{t-1})$ of type reports by the agents:

1. Each agent $i$ privately observes his current type $\theta_{it}\in\Theta_{it}$ drawn from $F_{it}(\,\cdot\mid\theta_i^{t-1},\chi_i^{t-1}(m^{t-1}))$.
2. Each agent $i$ sends a confidential message $m_{it}\in\Theta_{it}$ to the mechanism.
3. The mechanism implements the decision $\chi_t(m^t)$.
4. Each agent $i$ observes $\chi_{it}(m^t)$.

After period $T$, the mechanism also implements the payments $\psi(m^T)$.

²⁰ Some readers may feel that an agent must always be able to observe his own final payoff (indeed, this was the case in our model in Section 3). This can still be compatible with an interdependent-value model in which agent $i$ observes $x_{-i}^T$ and $\theta_{-i}^T$ at the end of period $T$ and is unable to report them to the mechanism. If we instead allowed the agent to report his observed final payoff in an interdependent-value model to the mechanism, as in Mezzetti (2004), we would effectively convert the model to one with correlated private observations, allowing for full surplus extraction.
²¹ In our environment there are no actions to be privately chosen by the agents. If the agents were also to choose hidden actions, then a direct mechanism would also send the agents recommendations for the hidden actions.

A mechanism induces an extensive-form game between the agents. A (pure) strategy for agent $i$ is a complete contingent plan
$$\sigma_i\equiv\bigl(\sigma_{it}:\Theta_i^{t-1}\times\Theta_i^{t}\times X_i^{t-1}\to\Theta_{it}\bigr)_{t=1}^{T}.$$
Truthful strategies are defined as in the single-agent case. If all agents play truthful strategies, a deterministic allocation rule $\chi$ induces a stochastic process on the agents' types $\theta$ described by the kernels $F_t(\,\cdot\mid\theta^{t-1},\chi^{t-1}(\theta^{t-1}))$. We let $\lambda[\chi]$ denote the resulting probability measure on $\Theta$. Similarly, if all agents but $i$ are playing truthful strategies, while agent $i$ follows a strategy $\sigma_i$, this induces a stochastic process on $\Theta\times\Theta_i^T$, described by the kernels $F$, the allocation rule $\chi$, and the strategy $\sigma_i$, where $m_i^T\in\Theta_i^T$ denotes agent $i$'s reports. We let $\lambda_{-i}[\chi;\sigma_i]$ denote the resulting probability measure on $\Theta\times\Theta_i^T$. Equipped with this notation, we can define ex-ante incentive compatibility of a mechanism as follows.

Definition 11 A deterministic direct mechanism $\langle\chi,\psi\rangle$ is ex-ante Bayesian Incentive Compatible (ex-ante BIC) if for all $i$ and all $\sigma_i$,
$$\mathbb{E}^{\lambda[\chi]}\bigl[u_i(\tilde\theta,\chi(\tilde\theta))-\psi_i(\tilde\theta)\bigr]\ \ge\ \mathbb{E}^{\lambda_{-i}[\chi;\sigma_i]}\bigl[u_i(\tilde\theta,\chi(\tilde m_i^T,\tilde\theta_{-i}))-\psi_i(\tilde m_i^T,\tilde\theta_{-i})\bigr].$$

That is, a mechanism is ex-ante BIC if the truthful strategies form a Bayesian Nash Equilibrium of the game induced by the mechanism.

4.3 Mapping the multiagent into the single-agent case

We now show that, from the perspective of each agent, the environment described above can be mapped into the single-agent model considered in Section 3. To see this, fix an arbitrary agent $i$. Given any deterministic mechanism $\langle\chi,\psi\rangle$, when agents $-i$ are playing truthful strategies, agent $i$ effectively faces a randomized mechanism where the randomizations are due to the uncertainty that agent $i$ faces about the other agents' types. Over the course of the mechanism, agent $i$ only observes $(\theta_i^T,m_i^T,x_i^T)$. However, the mechanism depends on the other agents' types $\theta_{-i}^T$ through their equilibrium messages; furthermore, agent $i$'s utility may depend directly on $\theta_{-i}^T$ and $x_{-i}^T$. Thus evaluating the optimality of $i$'s strategy requires keeping track of his beliefs about $\theta_{-i}^T$ conditional on the observed history.

Formally, the problem faced by agent $i$ can be mapped into the single-agent model considered in the previous section as follows. For all $t<T$, let $Y_{it}=X_{it}$, and let $Y_{iT}=X_{iT}\times X_{-i}^T\times\Theta_{-i}^T$. Also, let $Y_{i,T+1}=\mathbb{R}^T$. That is, in periods $t<T$ the decision $y_{it}=x_{it}$ consists of the part of the allocation observed by agent $i$. In period $T$, the decision $y_{iT}$ also shows the agent the rest of the variables affecting his utility (i.e., the part of the allocation $x_{-i}^T$ unobservable to him and the other agents' types $\theta_{-i}^T$). Then in period $T+1$, which is introduced just as a convenient modelling device, the agent observes his payment $p_i^T$.

Agent $i$'s type $\theta_{it}$ evolves according to the kernels $F_i=(F_{it}:\Theta_i^{t-1}\times Y_i^{t-1}\to\Delta(\Theta_{it}))_{t=1}^T=(F_{it}:\Theta_i^{t-1}\times X_i^{t-1}\to\Delta(\Theta_{it}))_{t=1}^T$, where the equality is by definition of $Y_{it}$. There is no type in period $T+1$ (formally, $\Theta_{i,T+1}$ can be taken to be an arbitrary singleton).

In the single-agent setup, agent $i$'s payoff is defined over $\Theta_i^T\times Y_i$, where $Y_i\equiv\prod_{t=1}^{T+1}Y_{it}$. However, by construction $\Theta_i^T\times Y_i$ is simply a reordering of $\Theta\times X\times\mathbb{R}^T$, the domain of agent $i$'s payoff in the multiagent environment. To highlight this connection, we abuse notation and continue to use $U_i$ with its arguments appropriately reordered.

Agent $i$ faces a randomized mechanism $\chi_i=\chi_i[\chi,\psi]=(\chi_{it}:\Theta_i^t\times Y_i^{t-1}\to\Delta(Y_{it}))_{t=1}^{T+1}$ constructed as follows. We first construct inductively a consistent family of regular conditional probability distributions (rcpd) that represent the evolution of agent $i$'s beliefs about $\theta_{-i}^T$ conditional on observable allocations and his own messages.²² Fix $t<T$. Suppose that a rcpd $\mu_i^{t-1}(\,\cdot\mid\sigma(\chi_{-i}^{t-1}(m_i^{t-1},\tilde\theta_{-i}^{t-1})))$ exists for all $m_i^{t-1}$. (The conditioning here is on the random variable $\chi_{-i}^{t-1}(m_i^{t-1},\tilde\theta_{-i}^{t-1})$ taking values in $Y_i^{t-1}$.) Note that the assumption holds vacuously for $t=1$. For all $m_i^t$, the rcpd $\mu_i^{t-1}(\,\cdot\mid\sigma(\chi_{-i}^{t-1}(m_i^{t-1},\tilde\theta_{-i}^{t-1})))$ and the kernels $F_{-i,t}(\,\cdot\mid\theta_{-i}^{t-1},\chi_{-i}^{t-1}(m_i^{t-1},\theta_{-i}^{t-1}))$ induce a probability measure on $\Theta_{-i}^t$. Since $\Theta_{-i}^t\subseteq\mathbb{R}^{(N-1)t}$, there exists a rcpd of $\tilde\theta_{-i}^t$ given $\sigma(\chi_{-i}^t(m_i^t,\tilde\theta_{-i}^t))$, where $\sigma(\chi_{-i}^t(m_i^t,\tilde\theta_{-i}^t))$ denotes the sigma-algebra generated by the random variable $\chi_{-i}^t(m_i^t,\tilde\theta_{-i}^t)$ (see, e.g., Theorem 10.2.2 in Dudley, 2002). We define $\mu_i^t(\,\cdot\mid\sigma(\chi_{-i}^t(m_i^t,\tilde\theta_{-i}^t)))$ to be this rcpd. Consistency of the family follows by construction. At $t=T$ the decision $y_{iT}$ reveals $\theta_{-i}^T$ to the agent, and hence his beliefs are degenerate in periods $T$ and $T+1$.

Let $t<T$ and fix a history $(m_i^t,y_i^{t-1})$. Then for any measurable $A\subseteq Y_{it}$, the probability that $y_{it}\in A$ is
$$\chi_{it}(A\mid m_i^t,y_i^{t-1})\equiv\int_{\{\theta_{-i}^t\in\Theta_{-i}^t:\,\chi_{it}(m_i^t,\theta_{-i}^t)\in A\}} dF_{-i,t}\bigl(\theta_{-i,t}\mid\theta_{-i}^{t-1},\chi_{-i}^{t-1}(m_i^{t-1},\theta_{-i}^{t-1})\bigr)\,d\mu_i^{t-1}\bigl(\theta_{-i}^{t-1}\mid\chi_{-i}^{t-1}(m_i^{t-1},\tilde\theta_{-i}^{t-1})=y_i^{t-1}\bigr).$$
The measure $\chi_{iT}(\,\cdot\mid m_i^T,y_i^{T-1})$ is defined analogously, except that the integral is over the set
$$\bigl\{\theta_{-i}^T\in\Theta_{-i}^T:\bigl(\chi_i^T(m_i^T,\theta_{-i}^T),\chi_{-i}^T(m_i^T,\theta_{-i}^T),\theta_{-i}^T\bigr)\in A\bigr\}.$$
Finally, $\chi_{i,T+1}(\,\cdot\mid m_i^T,(x_i^T,x_{-i}^T,\theta_{-i}^T))$ is defined to be a point mass at $\psi_i(m_i^T,\theta_{-i}^T)$. This defines the randomized direct mechanism $\chi_i=\chi_i[\chi,\psi]$.

²² See, e.g., Dudley (2002) for the definition of a regular conditional probability distribution.

Thus, from the perspective of agent $i$, there is a decision $y_{it}$ in each period $t$, his type $\theta_{it}$ evolves according to the kernels $F_i$, his utility is given by $U_i$, and he is facing a randomized direct mechanism $\chi_i$. This is the setup considered in the single-agent part. In particular, let $H_i$ denote the set of the agent's private histories $h_i=(\theta_i^s,m_i^t,y_i^u)$, $s\ge t\ge u\ge s-1$. Then a strategy $\sigma_i$ and a private history $h_i\in H_i$ induce a probability measure $\lambda_i[\chi_i;\sigma_i]\mid h_i$ on $\Theta_i^T\times Y_i$. Since $\chi_i$ is derived from the multiagent mechanism $\langle\chi,\psi\rangle$, we abuse notation and write $\lambda_i[\chi,\psi;\sigma_i]\mid h_i$ to emphasize the connection to the original mechanism. For the truthful strategy and the null history, the measure is then denoted $\lambda_i[\chi,\psi]\mid h_i$ and $\lambda_i[\chi,\psi]$, respectively. The agent's payoff from truthtelling following history $h_i$ is thus $\mathbb{E}^{\lambda_i[\chi,\psi]\mid h_i}[U_i(\tilde\theta_i,\tilde y_i)]=\mathbb{E}^{\lambda_i[\chi,\psi]\mid h_i}[U_i(\tilde\theta,\tilde x,\tilde p_i^T)]$, where the equality is by definition of $y_i$. We can then define the value function $V_i^{\chi_i[\chi,\psi]}:H_i\to\mathbb{R}$ and incentive compatibility at a private history $h_i$ analogously to the single-agent definitions.

It will be convenient to let $\lambda_i^T[\chi]\mid h_i$ denote the marginal of $\lambda_i[\chi,\psi]\mid h_i$ on $\Theta_i^T\times\Theta_i^T\times Y_i^T$ given private history $h_i$. Thus, $\lambda_i^T[\chi]\mid h_i$ is a process on types, messages, and nonmonetary allocations, but not on the payments (which by our convention are only made in period $T+1$). The role of this notation is to highlight the fact that the stochastic process over everything but the payments in the quasilinear environment is determined by the allocation rule $\chi$, independently of the payment scheme. Since the payment scheme $\psi$ is a deterministic function of the messages (which under $\lambda_i^T[\chi]\mid h_i$ are truthful), we can use $\lambda_i^T[\chi]\mid h_i$ to write agent $i$'s payoff as $\mathbb{E}^{\lambda_i^T[\chi]\mid h_i}[u_i(\tilde\theta,\tilde x)-\psi_i(\tilde\theta)]$.

4.4 Revenue equivalence

Suppose the assumptions in Proposition 1, or alternatively those in Proposition 3 along with assumption 3, hold for any $i$. We then have that in any mechanism $\langle\chi,\psi\rangle$ that is IC for agent $i$ at a truthful private history $h_i^{t-1}=(\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$, $V_i^{\chi_i[\chi,\psi]}(\theta_{it},h_i^{t-1})$ is equi-Lipschitz continuous in $\theta_{it}$. Furthermore, under quasi-linearity, the derivative of $V_i^{\chi_i[\chi,\psi]}(\theta_{it},h_i^{t-1})$ with respect to $\theta_{it}$ does not depend on the payment scheme. Under the assumptions of Proposition 1, this can be seen by iterating (IC-FOC) backward starting from $t=T$. Under the assumptions of Proposition 3, this can be seen directly from (6).

In a quasi-linear environment, the aforementioned propositions thus imply that in each ex-ante BIC mechanism, the value function of each agent $i$ at any truthful private history $h_i^{t-1}=(\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$ is pinned down by the allocation rule up to a constant $k_i(h_i^{t-1})$ that may depend on $h_i^{t-1}$, but not on $\theta_{it}$.

Using the law of iterated expectations, one can then also get rid of the dependence of this constant on the history. To see this more clearly, suppose there is a single agent $i$ and assume, for simplicity, that there are only two periods. Now consider any two ex-ante IC deterministic mechanisms $\langle\chi,\psi\rangle$ and $\langle\chi,\hat\psi\rangle$ implementing the same allocation rule $\chi$. Then in period two, for any truthful history $h_{i1}=(\theta_{i1},\theta_{i1},\chi_1(\theta_{i1}))$, there exists a scalar $k_i(h_{i1})=K_i(\theta_{i1})$ such that, for any $\theta_{i2}$,
$$V_i^{\chi_i[\chi,\psi]}(\theta_{i2},h_{i1})-V_i^{\chi_i[\chi,\hat\psi]}(\theta_{i2},h_{i1})=K_i(\theta_{i1}).$$
A similar result also applies to period one: there exists a scalar $K_i$ such that, for each $\theta_{i1}$,
$$V_i^{\chi_i[\chi,\psi]}(\theta_{i1})-V_i^{\chi_i[\chi,\hat\psi]}(\theta_{i1})=K_i.$$
Because $V_i^{\chi_i[\chi,\psi]}(\theta_{i1})$ is simply the expectation of $V_i^{\chi_i[\chi,\psi]}(\theta_{i2},h_{i1})$, we then have that $K_i(\theta_{i1})=K_i$ for all $\theta_{i1}$.

Clearly, the same result extends to any $T$. Furthermore, because the value function coincides with the equilibrium payoff with probability one, and because the latter is simply the difference between the expectation of $u(\tilde\theta_i^T,\chi(\tilde\theta_i^T))$ and the expectation of $\psi(\tilde\theta_i^T)$, we have that the entire payment scheme $\psi$ is uniquely determined by the allocation rule $\chi$ up to a scalar.

This result extends to a setting with multiple agents, provided that types are independent: the total payment that each agent expects to make to the mechanism as a function of his period-one type is uniquely determined by the allocation rule $\chi$ up to a scalar $K_i$ that does not depend on $\theta_{i1}$. This is the famous "revenue equivalence" result extensively documented in static environments. More generally, one can show that the same result extends to any arbitrary period $t\ge 1$ provided that the following condition holds.

Assumption 11 (DNOT) Decisions do Not Affect Types: For all $i=1,\ldots,N$ and all $t=2,\ldots,T$, the kernels $F_{it}(\theta_{it}\mid\theta_i^{t-1},x_i^{t-1})$ do not depend on $x_i^{t-1}$.

We then have the following result.

Proposition 6 Suppose types are independent and that the environment satisfies, for all $i$, either the assumptions of Proposition 1, or those of Proposition 3 along with assumption 3. Consider any two ex-ante BIC deterministic mechanisms $\langle\chi,\psi\rangle$ and $\langle\chi,\hat\psi\rangle$ implementing the same allocation rule $\chi$. Then for all $i$, there exists a $K_i\in\mathbb{R}$ such that²³
$$\mathbb{E}^{\lambda[\chi]}[\psi_i(\tilde\theta)\mid\tilde\theta_{i1}]-\mathbb{E}^{\lambda[\chi]}[\hat\psi_i(\tilde\theta)\mid\tilde\theta_{i1}]=K_i. \tag{12}$$
If, in addition, assumption DNOT holds (with $N=1$, assumption DNOT can be dispensed with), then, for all $i$ and any $t,s$,
$$\mathbb{E}^{\lambda[\chi]}[\psi_i(\tilde\theta)\mid\tilde\theta_i^t]-\mathbb{E}^{\lambda[\chi]}[\hat\psi_i(\tilde\theta)\mid\tilde\theta_i^t]=\mathbb{E}^{\lambda[\chi]}[\psi_i(\tilde\theta)\mid\tilde\theta_i^s]-\mathbb{E}^{\lambda[\chi]}[\hat\psi_i(\tilde\theta)\mid\tilde\theta_i^s]. \tag{13}$$

The value of Proposition 6 is twofold: (a) it sheds light on certain real-world institutions (for example, it can be used to establish revenue equivalence across different dynamic auction formats); (b) it facilitates the characterization of profit-maximizing mechanisms by permitting one to express the principal's expected payoff as expected virtual surplus, as illustrated below. Both (a) and (b) use the result of Proposition 6 only for $t=1$. However, the property that, when decisions do not affect types, the difference in expected payments remains constant over time in the sense of condition (13) also turns out to be useful in certain applications.

Note also that the result in Proposition 6 can be sharpened by considering a stronger solution concept. Suppose one is interested in mechanisms with the property that each agent finds it optimal to report truthfully even after being shown, at the beginning of period one, the entire profile of the other agents' types $\theta_{-i}^T$ (which includes current and future types). Then a simple backward induction argument similar to the one used to establish Proposition 6 implies that, for each agent, payments are uniquely determined up to a scalar not only in expectation but for each state $(\theta_i^T,\theta_{-i}^T)$. (We provide sufficient conditions for the resulting mechanism to indeed satisfy this robustness to information leakage in Corollary 1 below.)

²³ Given a mechanism $\langle\chi,\psi\rangle$, $\mathbb{E}^{\lambda[\chi]}[\psi_i(\tilde\theta)\mid\tilde\theta_i^t]$ denotes the expectation of $\psi_i(\tilde\theta)$ conditional on the random variable $\tilde\theta_i^t$, where, as usual, conditional expectations are interpreted as Radon-Nikodym derivatives.

(We provide su¢ cient conditions for the resulting mechanism to indeed satisfy this robustness to information leakage in Corollary 1 below.) Lastly, note that a key assumption in Proposition 6 is that types are independent. As mentioned above, this assumption has two parts: First, it requires that, given (

t 1

; xt

1 ),

current types are

independent across agents; Second it requires that the distribution of each agent i’s current type it

depends only on objects observable to agent i, that is, on (

t 1 t 1 i ; xi ).

The importance of the

…rst part for revenue equivalence is well understood. The arguments are the same as in static environments (see, e.g., Cremer and McLean, 1988). The importance of the second part may be less obvious. To see it, suppose for simplicity there are only two periods and assume that the distribution of

i2

depends not only on

i1 ; xi1

but also on a variable x

i;1

that is not directly

observed by agent i but which is observed by the principal (or by whoever runs the mechanism). Depending on the application, one may think of x

i;1

as the amount of R&D commissioned to a

research lab (the principal) by competitive clients (the other agents). Alternatively, one may think of x

i;1

as the unobservable quality of a product supplied by the principal to buyer i. If x

known to the principal but not to agent i and if it is correlated with extract all the private information that agent i possesses about

i2

i2 ,

i;1

is

then the principal can

for free (the arguments here are

once again the same as in the case of correlated types). This clearly precludes revenue equivalence.

4.5 Dynamic virtual surplus and profit-maximizing mechanisms

In a static setting, the envelope formula allows one to calculate the agents' information rents, providing a tool for designing profit-maximizing mechanisms. We show here how this approach extends to a dynamic setting. We start by showing how the dynamic payoff formula derived in Section 3 permits one to compute expected rents, and then show how the latter can be used to derive profit-maximizing mechanisms.

Suppose that, in addition to the $N$ agents, there is a "principal" (referred to as "agent 0") who designs the mechanism and whose payoff takes the quasilinear form
$$U_0(\theta,x,p)=u_0(x,\theta)+\sum_{i=1}^N p_i$$
for some measurable function $u_0:\Theta\times X\to\mathbb{R}$. As is standard in the literature, we assume that the principal designs the mechanism and then makes a take-it-or-leave-it offer to the agents in period one, after each agent has observed his first-period type.²⁴ We restrict the principal to offer a mechanism that all the agents accept in equilibrium (this is actually without loss of generality, as long as the agents' outside options can be replicated as part of the mechanism).

The requirement that all agents accept the mechanism gives rise to participation constraints in period 1. In addition, agents might have the ability to quit the mechanism at later stages, which may give rise to participation constraints in subsequent periods. However, the principal can always relax all the participation constraints after the initial acceptance decision by asking each agent to post a bond when accepting the mechanism; this bond is forfeited if the agent quits the mechanism, and is otherwise returned to the agent after period $T$.²⁵ Thus, we can restrict attention to the participation constraints that each agent faces at the moment he is being offered the mechanism. This constraint requires that each agent's value function in the mechanism upon observing his first-period type be at least as high as the payoff the agent obtains by refusing to participate in the mechanism (i.e., his reservation payoff). For simplicity, we assume that reservation payoffs are equal to zero for all agents and all types. The participation constraints can then be written as
$$V_i^{\chi_i[\chi,\psi]}(\theta_{i1})\ge 0\quad\text{for all }i,\text{ all }\theta_{i1}\in\Theta_{i1}. \tag{14}$$

DEF of RELAXED PROGRAM MUST BE CHANGED: THE CURRENT ONE DOES NOT LEAD TO EXPECTED DYNAMIC VIRTUAL SURPLUS.

The principal designs a mechanism to maximize her expected payoff subject to the agents' incentive compatibility and participation constraints. While this problem appears quite complicated, it can be simplified by first setting up a "Relaxed Program" that contains only a subset of the constraints, and then providing conditions for a solution to the Relaxed Program to satisfy all of the constraints. In particular, the Relaxed Program replaces all the incentive-compatibility constraints with the local incentive-compatibility conditions embodied in the period-1 dynamic payoff formula derived in Section 3. Specifically, assuming for simplicity that the distributions satisfy Assumption 7, according to Proposition 2, ex-ante IC for agent $i$ implies that $V_i^{\chi_i[\chi,\psi]}(\theta_{i1})$ is Lipschitz continuous, and for a.e. $\theta_{i1}$,
$$\frac{\partial V_i^{\chi_i[\chi,\psi]}(\theta_{i1})}{\partial\theta_{i1}}=\mathbb{E}^{\lambda_i^T[\chi]\mid\theta_{i1}}\left[\sum_{\tau=1}^{T}J_{i1}^{\tau}(\tilde\theta_i^\tau;\tilde x_i^{\tau-1})\,\frac{\partial u_i(\tilde\theta,\tilde x)}{\partial\theta_{i\tau}}\right]. \tag{15}$$

²⁴ If the principal could approach the agents at the ex-ante stage, before they learn their private information, she could extract all the surplus and hence she would implement an efficient allocation rule.
²⁵ The possibility of bonding relies on the following assumptions: (a) unrestricted monetary transfers (in particular, unlimited liability); (b) quasilinear utilities (which rules out any benefit from consumption smoothing); and (c) continuation utilities in the mechanism being bounded from below and continuation utilities from quitting being bounded from above. If these assumptions are not satisfied, one has to consider participation constraints in all periods, which makes the analysis considerably harder.

As for the participation constraints, the Relaxed Program considers only those for the lowest types $\underline\theta_{i1}$. Using the value functions of the lowest types and (15), we can calculate the agents' information rents and then express the principal's expected payoff as the difference between the expected total surplus and the sum of the agents' expected information rents.

Lemma 3 Suppose the environment is quasilinear, that types are independent, and that for each $i=1,\ldots,N$, the assumptions of either Proposition 2 or Proposition 4 hold, and $\underline\theta_{i1}>-\infty$. Then the principal's expected payoff in an IC mechanism $\langle\chi,\psi\rangle$ equals
$$\mathbb{E}^{\lambda[\chi]}\bigl[U_0(\tilde\theta,\chi(\tilde\theta),\psi(\tilde\theta))\bigr]=\mathbb{E}^{\lambda[\chi]}\left[\sum_{i=0}^{N}u_i(\tilde\theta,\chi(\tilde\theta))-\sum_{i=1}^{N}\frac{1}{\eta_{i1}(\tilde\theta_{i1})}\sum_{t=1}^{T}J_{i1}^{t}(\tilde\theta;\chi(\tilde\theta))\,\frac{\partial u_i(\tilde\theta,\chi(\tilde\theta))}{\partial\theta_{it}}\right]-\sum_{i=1}^{N}V_i^{\chi_i[\chi,\psi]}(\underline\theta_{i1}),$$
where $\eta_{i1}(\theta_{i1})\equiv f_{i1}(\theta_{i1})/(1-F_{i1}(\theta_{i1}))$ is the hazard rate of the distribution $F_{i1}$.

Proof. Ex-ante IC implies that, for each $i=1,\ldots,N$, agent $i$'s ex-ante equilibrium expected payoff in the mechanism must coincide with the expectation of the value function. Using (15) and integrating by parts, we then have that
$$\begin{aligned}
\mathbb{E}^{\lambda[\chi]}\bigl[U_i(\tilde\theta,\chi(\tilde\theta),\psi(\tilde\theta))\bigr]&=\mathbb{E}^{\lambda[\chi]}\Bigl[V_i^{\chi_i[\chi,\psi]}(\tilde\theta_{i1})\Bigr]\\
&=\mathbb{E}^{\lambda[\chi]}\left[\frac{1}{\eta_{i1}(\tilde\theta_{i1})}\,\frac{\partial V_i^{\chi_i[\chi,\psi]}(\tilde\theta_{i1})}{\partial\theta_{i1}}\right]+V_i^{\chi_i[\chi,\psi]}(\underline\theta_{i1})\\
&=\mathbb{E}^{\lambda[\chi]}\left[\frac{1}{\eta_{i1}(\tilde\theta_{i1})}\sum_{\tau=1}^{T}J_{i1}^{\tau}(\tilde\theta_i^\tau;\tilde x_i^{\tau-1})\,\frac{\partial u_i(\tilde\theta,\chi(\tilde\theta))}{\partial\theta_{i\tau}}\right]+V_i^{\chi_i[\chi,\psi]}(\underline\theta_{i1}).
\end{aligned}$$
Summing over $i=1,\ldots,N$ and subtracting the total sum from the expected total surplus yields the expected payoff of the principal.

The solution to the Relaxed Program is thus obtained by letting the lowest types' participation constraints bind,

$$V_i^{[\chi,\psi]}(\underline\theta_{i1}) = 0 \quad\text{for all } i, \qquad (16)$$

and choosing an allocation rule $\chi$ that maximizes the expression

$$\mathbb{E}^{\lambda[\chi]}\left[\sum_{i=0}^{N} u_i(\tilde\theta,\chi(\tilde\theta)) - \sum_{i=1}^{N}\frac{1}{\eta_{i1}(\tilde\theta_{i1})}\sum_{t=1}^{T} J_{i1}^{t}(\tilde\theta,\chi(\tilde\theta))\,\frac{\partial u_i(\tilde\theta,\chi(\tilde\theta))}{\partial\theta_{it}}\right], \qquad (17)$$

which we will henceforth refer to as the "expected dynamic virtual surplus." Clearly, if the solution to the Relaxed Program satisfies all the incentive and participation constraints, then it also solves the "Full Program," which consists in maximizing the principal's ex-ante expected payoff among all mechanisms that are ex-ante BIC and satisfy the participation constraints (14). We then have the following result.

Proposition 7 Suppose the environment is quasilinear, that types are independent, and that for each $i=1,\ldots,N$, the assumptions of either Proposition 2 or Proposition 4 hold, and $\underline\theta_{i1} > -\infty$. Suppose there exists an ex-ante BIC mechanism $\langle\chi,\psi\rangle$ such that the allocation rule $\chi$ maximizes the "expected dynamic virtual surplus" (17), the lowest types' participation constraints (16) bind, and all the participation constraints (14) are satisfied. Then the following are true:

(i) the mechanism $\langle\chi,\psi\rangle$ solves the Full Program;

(ii) in any mechanism that solves the Full Program, the allocation rule must maximize the expected dynamic virtual surplus (17);

(iii) the principal's expected payoff cannot be increased using randomized mechanisms.

Proof. Parts (i) and (ii) follow directly from Lemma 3. As for part (iii), note that, from the perspective of each single agent, a randomized mechanism is equivalent to a mechanism that conditions on the types of some fictitious agent $N+1$. The characterization of the necessary conditions for incentive compatibility in a stochastic mechanism thus parallels that for deterministic ones. Because the principal's payoff under a stochastic mechanism can always be expressed as a convex combination of her payoffs under different deterministic mechanisms, it is then immediate that stochastic mechanisms cannot raise the principal's expected payoff. (This point was made in static mechanism design by Strausz, 2006.)

Of course, Proposition 7 is only useful if one can indeed ensure that a solution to the Relaxed Program satisfies all the incentive and participation constraints. We give some sufficient conditions for this in subsection 4.7 below.
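To fix ideas, the expected dynamic virtual surplus (17) can be approximated by simulation. The following sketch is purely illustrative and not part of the paper's formal apparatus: it assumes a single agent with AR(1) types, flow utility $\sum_t \theta_t x_t$ (so $\partial u/\partial\theta_t = x_t$), Uniform[0,1] first-period types, and an arbitrary fixed allocation rule; all names in the code are ours.

```python
import random

# Illustrative sketch (not from the paper): Monte Carlo estimate of the
# expected dynamic virtual surplus (17) for one agent with AR(1) types
#   theta_t = phi * theta_{t-1} + eps_t,
# flow utility sum_t theta_t * x_t, Uniform[0,1] first-period types (so the
# inverse hazard rate is 1/eta(theta_1) = 1 - theta_1), and the fixed rule
# x_t = 1{theta_t > 0.2}. Impulse responses are J^t = phi**(t-1).
rng = random.Random(0)
phi, T, n = 0.5, 3, 50_000

surplus_sum = rent_sum = 0.0
for _ in range(n):
    theta1 = rng.random()
    theta, path = theta1, []
    for t in range(T):
        if t > 0:
            theta = phi * theta + rng.gauss(0.0, 0.1)
        path.append(theta)
    xs = [1.0 if th > 0.2 else 0.0 for th in path]
    surplus_sum += sum(th * x for th, x in zip(path, xs))
    # information rent term: (1/eta)(theta1) * sum_t J^t * (du/dtheta_t)
    rent_sum += (1.0 - theta1) * sum(phi**t * x for t, x in enumerate(xs))

surplus, rent = surplus_sum / n, rent_sum / n
virtual = surplus - rent
print(round(surplus, 3), round(rent, 3), round(virtual, 3))
```

In this linear AR(1) environment the impulse responses $J^t = \phi^{t-1}$ are constants, so the rent term simply reweights the allocations by how strongly the period-$t$ type echoes the period-1 type.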


4.6 Distortions

Here we focus on the Relaxed Program and characterize the distortions that a profit-maximizing principal creates relative to the efficient allocation rule. To begin with, we consider a special class of environments in which the expected virtual surplus (17) can be maximized separately for all times and states, without the need to solve a dynamic programming problem. This occurs when, in addition to assumption DNOT, the following property holds.

Assumption 12 (USEP) Utilities Time-Separable in Decisions: For all $i=0,\ldots,N$, $u_i(\theta,x) = \sum_{t=1}^{T} u_{it}(\theta^t,x_t)$.

Recall that, under assumption DNOT, the stochastic process $\lambda$ over $\Theta$ is exogenous and does not depend on the mechanism. If in addition USEP holds, the Relaxed Program is solved by requiring that for all periods $t$ and $\lambda$-almost all $\theta^t$,

$$\chi_t(\theta^t) \in \arg\max_{x_t\in X_t}\left[\sum_{i=0}^{N} u_{it}(\theta^t,x_t) - \sum_{i=1}^{N}\frac{1}{\eta_{i1}(\theta_{i1})}\,J_{i1}^{t}(\theta_i^t)\,\frac{\partial u_{it}(\theta^t,x_t)}{\partial\theta_{it}}\right]. \qquad (18)$$

It is then easy to compare an allocation rule that satisfies (18) with an efficient allocation rule $\chi^{*}$; the latter is such that, by definition, for all periods $t$ and $\lambda$-almost all $\theta^t$,

$$\chi_t^{*}(\theta^t) \in \arg\max_{x_t\in X_t}\left[\sum_{i=0}^{N} u_{it}(\theta^t,x_t)\right]. \qquad (19)$$
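To make the comparison between (18) and (19) concrete, here is a minimal illustrative sketch (our example, not the paper's): a single agent with quadratic flow utility $u_t(\theta_t,x_t) = \theta_t x_t - x_t^2/2$, Uniform[0,1] first-period types, and AR(1) impulse responses $J_{11}^t = \phi^{t-1}$, for which both programs can be solved in closed form.

```python
# Minimal sketch (our example): with one agent, quadratic flow utility
# u_t(theta_t, x_t) = theta_t*x_t - x_t**2/2, and zero principal payoff,
# the efficient rule (19) sets x_t* = theta_t, while the relaxed rule (18)
# sets x_t = theta_t - J1t/eta(theta_1): a downward distortion scaled by the
# impulse response J1t and the period-1 hazard rate.
phi = 0.7                      # assumed AR(1) coefficient, so J1t = phi**(t-1)

def eta(theta1):               # hazard rate of Uniform[0,1] first-period types
    return 1.0 / (1.0 - theta1)

def x_efficient(theta_t):
    return theta_t

def x_relaxed(theta_t, theta1, t):
    J1t = phi ** (t - 1)
    return theta_t - J1t / eta(theta1)

theta1, theta3 = 0.4, 0.9
print(x_efficient(theta3), x_relaxed(theta3, theta1, t=3))
```

Because $\phi < 1$, the wedge $J_{11}^t/\eta_{11}(\theta_{11})$ shrinks geometrically with $t$ here; with a bounded type space it would also vanish at the endpoints of each period's support, as discussed next.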

For simplicity, focus on the case of a single agent: $N=1$. First, note that when $\Theta_{1t}$ is bounded and either $\theta_{1t} = \underline\theta_{1t}$ or $\theta_{1t} = \bar\theta_{1t}$, then by construction the information index $J_{11}^{t}(\theta_1^t) = 0$, and so it is optimal to set $\chi_t(\theta^t) = \chi_t^{*}(\theta^t)$. Intuitively, when only period-1 participation constraints are relevant, the principal distorts the decisions only to reduce the agent's period-1 information rents. With time-separable utilities, distorting the allocations in period $t$ is then useful only to the extent that the type in period $t$ is informationally linked to the type in period one. When the agent's type in period $t$ coincides with either the highest or the lowest possible type for that period, the informational link disappears, in which case there is no reason to distort the period-$t$ decision. (In a Markov model, in which $J_{11}^{t} = \prod_{\tau=1}^{t-1} J_{1\tau}^{\tau+1}$, following $\theta_{1\tau} = \underline\theta_{1\tau}$ or $\theta_{1\tau} = \bar\theta_{1\tau}$ distortions then vanish also in all subsequent periods, since the informational link with period 1 is severed.)

It is interesting to contrast this finding with the conclusions of Battaglini (2005), who studies a single-agent model satisfying USEP and DNOT in which the agent's type space in each period has only two elements and the evolution of the agent's type is governed by a Markov process. In his model, from the moment the agent's type turns out to be high, the optimal mechanism entails no distortions in all subsequent periods (this result is referred to as Generalized No Distortion at the Top, or GNDT). Furthermore, the distortions that the agent experiences when his type remains low are monotonically decreasing in time and vanish in the limit as $T\to\infty$ (this result is referred to as Vanishing Distortions at the Bottom, or VDB). As the analysis above suggests, while the result of GNDT is quite robust in models satisfying DNOT and USEP, the result of VDB need not be. In particular, distortions need be monotonic neither in type nor in time, and should not be expected to vanish in the long run.26

On the other hand, for intermediate values of $\theta_{1t}$, distortions are determined by the interaction between the information index $J_{i1}^{t}(\theta_i^t,x_i^{t-1})$ and the partial derivative of the flow utility $u_{it}(\theta^t,x_t)$ with respect to $\theta_{it}$. For example, suppose that, in addition to the aforementioned assumptions, the following holds.

Assumption 13 (FOSD) First-Order Stochastic Dominance: For all $i=1,\ldots,N$ and all $t=2,\ldots,T$, $F_{it}(\theta_{it}\mid\theta_i^{t-1},x_i^{t-1})$ is nonincreasing in $\theta_i^{t-1}$.

Note that FOSD implies that $J_{it}^{\tau}(\theta_i^{\tau},x_i^{\tau-1}) \ge 0$; comparing the Relaxed Program (18) with the Efficient Program (19), one can then see that in the Relaxed Program the principal distorts $x_t$ to reduce the partial derivative $\partial u_{it}(\theta^t,x_t)/\partial\theta_{it}$. In the standard case in which $x_t$ is one-dimensional and the agent's utility $u_{it}(\theta^t,x_t)$ has the standard single-crossing property, this partial derivative is reduced by reducing $x_t$. Thus, the solution to the Relaxed Program involves downward distortions in all periods $t>1$ for intermediate types (and in period $t=1$ for all but the lowest type). Intuitively, FOSD means that the type in each period $t>1$ is positively informationally linked to the period-1 type. Then, under the single-crossing property, a downward distortion in period $t$, by reducing the agent's information rent in period $t$, also reduces his information rent in period 1, thus raising the principal's expected payoff.

This result of downward distortions can be extended to settings that do not satisfy assumption USEP and that have many agents, under the following generalization of the single-crossing property.

Assumption 14 (SCP) Single-Crossing Property: for each $t$, $X_t$ is a lattice, and for each $i=1,\ldots,N$, $u_i(\theta,x)$ has increasing differences in $(\theta_i,x)$.

The assumption that $X_t$ is a lattice is reasonable with one agent. With many agents, it is reasonable, say, if $x_t$ describes the provision of public goods, but it need not hold if $x_t$ is the allocation of a private good (see footnote 18 above for both examples). The lattice structure on each $X_t$ induces a product lattice structure on the set $\mathcal{X}$ of all (measurable) decision rules.

26 We refer the reader to our companion paper, Pavan, Segal, and Toikka (2008), for a further discussion of the dynamics of distortions in profit-maximizing mechanisms.


Proposition 8 Let $\mathcal{X}' \subseteq \mathcal{X}$ denote the set of decision rules solving the Relaxed Program and $\mathcal{X}^{*} \subseteq \mathcal{X}$ the set of decision rules maximizing expected total surplus. Suppose that, for all $i=0,\ldots,N$, assumptions DNOT, FOSD, and SCP hold, and in addition, (i) $u_i(\theta,x)$ is supermodular in $x$, and (ii) $\partial u_i(\theta,x)/\partial\theta_{it}$ is submodular in $x$. Then $\mathcal{X}' \ge \mathcal{X}^{*}$ in the strong set order.

Proof. Define $g:\mathcal{X}\times\{-1,0\}\to\mathbb{R}$ as

$$g(\chi,z) \equiv \mathbb{E}\left[\sum_{i=0}^{N} u_i(\tilde\theta,\chi(\tilde\theta)) + z\sum_{i=1}^{N}\frac{1}{\eta_{i1}(\tilde\theta_{i1})}\sum_{t=1}^{T} J_{i1}^{t}(\tilde\theta_i^{t})\,\frac{\partial u_i(\tilde\theta,\chi(\tilde\theta))}{\partial\theta_{it}}\right].$$

Then $g(\chi,0)$ is the expected total surplus and $g(\chi,-1)$ is the expected virtual surplus. (Assumption DNOT ensures that the stochastic process $\lambda[\chi]$ does not depend on $\chi$ and that $J_{i1}^{t}(\theta_i^t,x_i^{t-1})$ does not depend on $x_i^{t-1}$, which is reflected in the formula.) The assumption of FOSD ensures that $J_{i1}^{t}(\tilde\theta_i^{t}) \ge 0$. Together with SCP, this ensures that $g$ has increasing differences in $(\chi,z)$. Together with (i) and (ii), this ensures that $g$ is supermodular in $\chi$. The result then follows from Topkis's Theorem (see, e.g., Topkis, 1998).

The result means that if $\chi'$ solves the relaxed problem and $\chi^{*}$ is efficient, then the decision rule $\chi'\vee\chi^{*}$, defined by $(\chi'\vee\chi^{*})_t(\theta) = \chi_t'(\theta)\vee\chi_t^{*}(\theta)$, solves the relaxed problem, and the decision rule $\chi'\wedge\chi^{*}$, defined by $(\chi'\wedge\chi^{*})_t(\theta) = \chi_t'(\theta)\wedge\chi_t^{*}(\theta)$, is efficient. In particular, if $\chi'$ and $\chi^{*}$ are defined uniquely with probability one, then $\chi'(\theta) \ge \chi^{*}(\theta)$ with probability one.

Note that condition (ii) in Proposition 8 is a third-derivative assumption. Also note that (i) and (ii) hold trivially when each $X_t$ is a chain (e.g., $X_t \subseteq \mathbb{R}$) and USEP holds.

4.7 Sufficiency and Robustness

We now turn to sufficient conditions for incentive compatibility. As anticipated in the introduction, a complete characterization is elusive because of the multidimensional decision space of the problem. Hereafter, we propose some sufficient conditions for a solution to the Relaxed Program to satisfy all of the incentive and participation constraints that we believe can help in applications. First we provide sufficient conditions for the participation constraints of all types above the lowest type to be redundant.

Proposition 9 Suppose that for each $i=1,\ldots,N$, $u_i(\theta,x)$ is increasing in each $\theta_{it}$ and assumption FOSD holds. Then any mechanism $\langle\chi,\psi\rangle$ satisfying the lowest types' participation constraints (16) and the dynamic payoff formula (15) for period one satisfies all the participation constraints (14).


Proof. Under the assumptions in the proposition, $J_{i1}^{t}(\theta,\chi(\theta)) \ge 0$ and $\partial u_i(\theta,x)/\partial\theta_{it} \ge 0$; hence, by (15), $V_i^{[\chi,\psi]}(\theta_{i1})$ is nondecreasing in $\theta_{i1}$.

Next, consider incentive constraints. In what follows we provide conditions ensuring not only that a mechanism is ex-ante incentive compatible, but also that it is incentive compatible on the equilibrium path. That is, the value function of each agent $i$ at any of his truthful private histories $h^i$ coincides with his equilibrium expected payoff:

$$V_i^{[\chi,\psi]}(h^i) = \mathbb{E}^{\lambda_i[\chi,\psi]\mid h^i}\big[u_i(\tilde\theta,\tilde x) - \tilde p_i\big].$$

This stronger version of incentive compatibility thus guarantees that the allocation rule $\chi$ is implementable also under a stronger solution concept such as weak Perfect Bayesian Equilibrium.

First observe that, for any given allocation rule $\chi$, one can construct payment schemes $\psi$ such that the resulting utility that each agent obtains in equilibrium (i.e., under truthtelling by all agents) satisfies all the IC-FOC conditions of (15): i.e., at any truthful history $h^{i,t-1} = (\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$, the expected payoff $\Pi_{it}(\theta_{it};h^{i,t-1}) \equiv \mathbb{E}^{\lambda_i^T[\chi]\mid(\theta_{it},h^{i,t-1})}\big[u_i(\tilde\theta,\tilde x)-\tilde p_i\big]$ is Lipschitz continuous in $\theta_{it}$, and for a.e. $\theta_{it}$,

$$\frac{\partial\Pi_{it}(\theta_{it};h^{i,t-1})}{\partial\theta_{it}} = \mathbb{E}^{\lambda_i^T[\chi]\mid(\theta_{it},h^{i,t-1})}\left[\sum_{\tau=t}^{T} J_{it}^{\tau}\big(\tilde\theta_i^{\tau},\tilde x_i^{\tau-1}\big)\,\frac{\partial u_i(\tilde\theta,\tilde x)}{\partial\theta_{i\tau}}\right]. \qquad (20)$$

(Recall that $\lambda_i^T[\chi]\mid h^i$ denotes the probability distribution on $\Theta_{-i}^T\times X$ induced by the allocation rule $\chi$ when all agents other than $i$ play truthful strategies, agent $i$'s private history is $h^i$, and agent $i$ reports truthfully in the future.) To construct these payments, for all $i$, all $(\theta_i^t,x_i^{t-1})\in\Theta_i^t\times X_i^{t-1}$, and all $m_{it}\in\Theta_{it}$, let

$$D_i^{[\chi]}\big(\theta_i^t,(\theta_i^{t-1},m_{it}),x_i^{t-1}\big) \equiv \mathbb{E}^{\lambda_i^T[\chi]\mid(\theta_i^t,(\theta_i^{t-1},m_{it}),x_i^{t-1})}\left[\sum_{\tau=t}^{T} J_{it}^{\tau}\big(\tilde\theta_i^{\tau},\tilde x_i^{\tau-1}\big)\,\frac{\partial u_i(\tilde\theta,\tilde x)}{\partial\theta_{i\tau}}\right]. \qquad (21)$$

We then have the following result.

Lemma 4 Suppose the environment is quasilinear, that types are independent, and that for each $i=1,\ldots,N$, the assumptions of either Proposition 2 or Proposition 4 hold. Let $\langle\chi,\psi\rangle$ be any deterministic direct mechanism. Fix a period $t$. Consider the payment scheme $\hat\psi$ obtained from $\langle\chi,\psi\rangle$ by setting, for all $i$ and all $\theta$,

$$\hat\psi_i(\theta) = \psi_i(\theta) + \delta_i\big(\theta_i^t,\theta_{-i}^{t-1}\big),$$

where

$$\delta_i\big(\theta_i^t,\theta_{-i}^{t-1}\big) \equiv \mathbb{E}^{\lambda_i^T[\chi]\mid(\theta_i^t,\theta_i^t,x_i^{t-1})}\big[u_i(\tilde\theta,\tilde x)-\tilde p_i\big] - \int_{\underline\theta_{it}}^{\theta_{it}} D_i^{[\chi]}\big((\theta_i^{t-1},z),(\theta_i^{t-1},z),x_i^{t-1}\big)\,dz.$$

Then, for all $i$ and all truthful private histories $h^{i,t-1} = (\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$, the mechanism $\langle\chi,\hat\psi\rangle$ satisfies condition (20).

Proof. By construction, for all truthful private histories $h^{i,t-1} = (\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$ and all $\theta_{it}$,

$$\mathbb{E}^{\lambda_i[\chi,\hat\psi]\mid(\theta_{it},h^{i,t-1})}\big[u_i(\tilde\theta,\tilde x)-\tilde{\hat p}_i\big] = \mathbb{E}^{\lambda_i^T[\chi]\mid(\theta_i^t,\theta_i^t,x_i^{t-1})}\big[u_i(\tilde\theta,\tilde x)-\tilde p_i\big] - \delta_i\big(\theta_i^t,\theta_{-i}^{t-1}\big) = \int_{\underline\theta_{it}}^{\theta_{it}} D_i^{[\chi]}\big((\theta_i^{t-1},z),(\theta_i^{t-1},z),x_i^{t-1}\big)\,dz.$$

The first equality follows from the fact that $h^{i,t-1}$ is truthful and the fact that $\lambda_i^T[\chi]$ corresponds to the distribution over $\Theta_{-i}^T\times X$ under truthtelling (by all agents); the second equality follows directly from the definition of $\delta_i(\theta_i^t,\theta_{-i}^{t-1})$. Note that the function $D_i^{[\chi]}\big((\theta_i^{t-1},\cdot),(\theta_i^{t-1},\cdot),x_i^{t-1}\big)$ is measurable and bounded, and therefore integrable. Thus the mechanism $\langle\chi,\hat\psi\rangle$ satisfies (20) in period $t$.

Note that the construction achieves the satisfaction of condition (20) in period $t$ by adding to the original payment scheme $\psi_i(\theta)$ a payment term that depends only on reports up to period $t$; by implication, this construction does not affect the agents' incentives in subsequent periods. Thus, for any given allocation rule $\chi$, iterating the construction of the payments backward from period $T$ to period one yields a mechanism that, in any period, after any truthful history $h^{i,t-1}$, satisfies condition (20).

Now, using the payments constructed in Lemma 4, we provide a sufficient condition for the allocation rule $\chi$ to be implementable, which is obtained by specializing Proposition 5 to quasilinear environments.

Proposition 10 Suppose the environment is quasilinear, that types are independent, and that for each $i=1,\ldots,N$, the assumptions of either Proposition 2 or Proposition 4 hold. Suppose the mechanism $\langle\chi,\psi\rangle$ is IC at any (possibly non-truthful) period $t+1$ private history. If for all $i$ and all $(\theta_i^t,x_i^{t-1})$,

$$D_i^{[\chi]}\big(\theta_i^t,(\theta_i^{t-1},m_{it}),x_i^{t-1}\big) \text{ is nondecreasing in } m_{it},$$

then there exists a payment rule $\hat\psi$ such that the mechanism $\langle\chi,\hat\psi\rangle$ is IC at (i) any truthful period $t$ private history, and (ii) any (possibly non-truthful) period $t+1$ private history.

Proof. Let $\hat\psi$ be the payment rule obtained from $\langle\chi,\psi\rangle$ using the construction indicated in the proof of Lemma 4. By construction, $\hat\psi$ preserves the agents' incentives at all period $t+1$ histories. Hence the mechanism $\langle\chi,\hat\psi\rangle$ satisfies condition (iii) of Proposition 5. The payment scheme $\hat\psi$ also ensures that, after any truthful private history $h^{i,t-1} = (\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$, the mechanism $\langle\chi,\hat\psi\rangle$

satisfies condition (20) in period $t$. This establishes condition (i) of Proposition 5 for period $t$. The assumption that $D_i^{[\chi]}\big(\theta_i^t,(\theta_i^{t-1},m_{it}),x_i^{t-1}\big)$ is nondecreasing in $m_{it}$ then implies that condition (ii) of Proposition 5 is also verified. The result then follows from Proposition 5.

To understand this result intuitively, fix a truthful history $(\theta_i^{t-1},\theta_i^{t-1},x_i^{t-1})$, and let $\Pi_t(\theta_{it},m_{it})$ denote agent $i$'s expected utility at this history as a function of his new type $\theta_{it}$ and his new report $m_{it}$. One can think of $m_{it}$ as a one-dimensional "allocation" chosen by agent $i$ in period $t$. Note that $\partial\Pi_t(\theta_{it},m_{it})/\partial\theta_{it} = D_i^{[\chi]}\big(\theta_i^t,(\theta_i^{t-1},m_{it}),x_i^{t-1}\big)$; because the mechanism $\langle\chi,\psi\rangle$ is IC at any (possibly non-truthful) period $t+1$ history, this follows from the dynamic payoff formula (2) applied to the modified mechanism in which agent $i$'s report of $\theta_{it}$ is ignored and replaced with the message $m_{it}$. If this expression is nondecreasing in $m_{it}$, then $\Pi_t$ has the single-crossing property (formally, increasing differences). By standard static one-dimensional screening arguments, the monotonic "allocation rule" $m_{it}(\theta_{it}) = \theta_{it}$ is then implementable (using payments constructed from the dynamic payoff formula via the construction in Lemma 4).

The proposition cannot in general be iterated backward, since it assumes IC at all period $t+1$ histories but derives IC only at truthful period $t$ histories. This reflects a fundamental problem with ensuring incentives in dynamic mechanisms: once an agent has lied once, he may find it optimal to continue lying, and it is hard to characterize his continuation strategy. However, the proposition can still be applied to some interesting special cases. In particular, in a Markov environment, an agent's true past types are irrelevant for incentives given his current type. This implies that IC at truthful histories implies IC at all histories. Then the proposition can be rolled backward to show that the mechanism is IC at all histories. This result also implies that truthful strategies, together with the beliefs over the other agents' types constructed from the mechanism $\langle\chi,\psi\rangle$ (as regular conditional probability distributions) as indicated in subsection 4.3, form a weak PBE of the mechanism.
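The static one-dimensional screening step invoked above can be checked numerically. The sketch below is our illustration under simplifying assumptions (linear utility $\theta x - p$, a grid of types, a particular monotone allocation rule), not the paper's exact construction: envelope payments built by integrating the allocation rule make truthful reporting optimal whenever the rule is monotone.

```python
# Sketch (ours): for a monotone allocation rule x(theta) with linear utility
# theta*x - p, the envelope payments
#   p(theta) = theta*x(theta) - integral_0^theta x(s) ds
# make truthtelling optimal. We verify this on a grid.
n = 201
grid = [i / (n - 1) for i in range(n)]
x = [min(1.0, 2.0 * max(g - 0.25, 0.0)) for g in grid]   # monotone allocation rule

# cumulative trapezoid integral of x over the grid
I = [0.0]
for i in range(1, n):
    I.append(I[-1] + (x[i] + x[i - 1]) / 2 * (grid[i] - grid[i - 1]))

p = [grid[i] * x[i] - I[i] for i in range(n)]            # envelope payments

def payoff(i_type, i_report):
    return grid[i_type] * x[i_report] - p[i_report]

# largest gain any type can obtain by misreporting (0 up to float rounding)
worst_gain = max(
    max(payoff(i, m) for m in range(n)) - payoff(i, i) for i in range(n)
)
print("max gain from misreporting:", worst_gain)
```

In the dynamic setting, Lemma 4 plays the role of the integral term: it pins down the derivative of the continuation payoff so that the same single-crossing argument applies period by period.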

The result in Proposition 10 may also turn out useful in certain non-Markov environments, as illustrated in subsection 5.2 below.

Note that Proposition 10 can also be used to analyze the effects of disclosing to the agents, in the course of the mechanism, information in addition to the minimal one captured by $x_{it}$. Such disclosure can be captured formally by introducing a measurable space $X_{it}^d$ of possible disclosures to agent $i$ in period $t$, and then considering the extended set $\hat X_{it} = X_{it}\times X_{it}^d$, so that $\hat x_{it} = (x_{it},x_{it}^d)$. While the payoff and the stochastic process describing the evolution of agent $i$'s type continue to depend on $\hat x_{it}$ only through $x_{it}$, the role of $x_{it}^d$ is to capture the additional information that the mechanism discloses to agent $i$ about the other agents' reports (and hence about the decisions $x_{-it}$ as well). The result in Proposition 10 can then be extended to this environment by redefining $D_i^{[\chi]}$ so that the expectation in (21) is now made conditional on $\hat x_{it} = (x_{it},x_{it}^d)$ instead of just $x_{it}$. Clearly, the monotonicity condition in the proposition is harder to satisfy when more information


is disclosed, but it may still be possible.

In particular, we can formulate a simple condition on the allocation rule $\chi$ that ensures robustness to an extreme form of disclosure. Namely, suppose that each agent $i$ somehow learns at the beginning of period $t$ (i.e., before sending his period-$t$ report) all the other agents' types $\theta_{-i}$ (note that this includes past, current, and future ones). Formally, this can be captured through a disclosure $x_{it}^d = \theta_{-i}$. We then say that the mechanism is Other-Ex-Post IC (OEP-IC) if truthtelling remains an optimal strategy in this mechanism at any history. It turns out that some allocation rules can be implemented in an OEP-IC mechanism, under some additional assumptions.

Assumption 15 (PDPD) Payoffs Depend on Private Decisions: $u_i(\theta,x)$ depends on $x$ only through $x_i$.

Corollary 1 Suppose the environment is quasilinear, that types are independent, and that for each $i=1,\ldots,N$, the assumptions of either Proposition 2 or Proposition 4 hold. Suppose in addition that assumptions DNOT, FOSD, SCP, and PDPD hold, and that the mechanism $\langle\chi,\psi\rangle$ is OEP-IC at any (possibly non-truthful) period $t+1$ private history. If for all $i$ and all $\theta_{-i}$,

$$\chi_{i\tau}(\theta) \text{ is nondecreasing in } (\theta_{it},\ldots,\theta_{i\tau}) \text{ for all } \tau \ge t, \qquad (22)$$

then there exists a payment rule $\hat\psi$ such that the mechanism $\langle\chi,\hat\psi\rangle$ is OEP-IC at (i) any truthful period $t$ private history, and (ii) any (possibly non-truthful) period $t+1$ private history.

Proof. Under assumption DNOT, the stochastic process $\lambda[\chi]$ over $\Theta$ does not depend on the allocation rule $\chi$ and hence can be written as $\lambda$. Furthermore, because types are independent, $\lambda$ is the product of each individual agent $i$'s stochastic process over $\Theta_i$, which henceforth we denote by $\lambda_i$; we then denote by $\lambda_i\mid\theta_i^t$ the distribution over $\Theta_i$ given $\theta_i^t$. The payment rule $\hat\psi$ is obtained by adapting the construction of Lemma 4 to the situation where agent $i$ has observed $\theta_{-i}$ and faces a stochastic process $\lambda_i$ over his own types (which is essentially a single-agent situation): for any $(\theta_i^t,\theta_{-i})$,

$$\hat\psi_i(\theta) = \psi_i(\theta) + \delta_i\big(\theta_i^t,\theta_{-i}\big), \quad\text{where}$$

$$\delta_i\big(\theta_i^t,\theta_{-i}\big) \equiv \mathbb{E}^{\lambda_i\mid\theta_i^t}\Big[u_i\big((\tilde\theta_i,\theta_{-i}),\chi(\tilde\theta_i,\theta_{-i})\big) - \psi_i(\tilde\theta_i,\theta_{-i})\Big] - \int_{\underline\theta_{it}}^{\theta_{it}} D_i^{[\chi]}\big((\theta_i^{t-1},z),(\theta_i^{t-1},z),\theta_{-i}\big)\,dz,$$

and where

$$D_i^{[\chi]}\big(\theta_i^t,(\theta_i^{t-1},m_{it}),\theta_{-i}\big) \equiv \mathbb{E}^{\lambda_i\mid\theta_i^t}\left[\sum_{\tau=t}^{T} J_{it}^{\tau}(\tilde\theta_i^{\tau})\,\frac{\partial u_i\big((\tilde\theta_i,\theta_{-i}),\chi((m_{it},\tilde\theta_{i,-t}),\theta_{-i})\big)}{\partial\theta_{i\tau}}\right].$$

Note that, under assumption DNOT, $J_{it}^{\tau}(\theta_i^{\tau},x_i^{\tau-1})$ does not depend on $x_i^{\tau-1}$. By FOSD, $J_{it}^{\tau}(\cdot) \ge 0$. By SCP, PDPD, and (22), $\partial u_i\big(\theta,\chi((m_{it},\theta_{i,-t}),\theta_{-i})\big)/\partial\theta_{i\tau}$ is nondecreasing in $m_{it}$ for all $\theta_i$ and all $\theta_{-i}$. This implies that $D_i^{[\chi]}\big(\theta_i^t,(\theta_i^{t-1},m_{it}),\theta_{-i}\big)$ is nondecreasing in $m_{it}$ for all $\theta_i^t$ and $\theta_{-i}$. The result then follows from Proposition 10 applied to this setting.

For example, in a Markov environment, backward iteration of the Corollary implies that, under its assumptions, any allocation rule $\chi$ that is "strongly monotone," in the sense that each $\chi_{it}(\theta_i^t,\theta_{-i}^t)$ is nondecreasing in $\theta_i^t$ for any given $\theta_{-i}^t$ (which Matthews and Moore (1987) call "attribute monotonicity"), is implementable in an OEP-IC mechanism, and therefore in an IC mechanism under any possible information disclosure. While it should be clear from Proposition 10 that strong monotonicity is not necessary for implementability, it is particularly easy to check in applications, and it does ensure robustness to any kind of information disclosure in the mechanism. Subsections 5.2 and 5.3 provide examples of applications where the profit-maximizing allocation rule turns out to be strongly monotone.

Remark 2 At this point, the reader may wonder whether we could also ensure robustness to an agent observing his own future types from the outset. This is not likely. Indeed, if agent $i$ observes all of his types from the outset, his IC constraints would be those of a multidimensional screening problem, and it is well known that incentives are harder to ensure in that setting. For example, in the special case with a single agent with linear utility $u(\theta,x) = \sum_{t=1}^{T}\theta_t x_t$, a necessary condition for implementability of an allocation rule $\chi$ is the "Law of Supply":

$$\sum_{t=1}^{T}\big(\chi_t(\theta) - \chi_t(\theta')\big)\big(\theta_t - \theta_t'\big) \ge 0 \quad\text{for all } \theta,\theta' \in \Theta.$$

t ( t ) which need not be monotonic in determined by the information index Ji1 i it it .

is bounded, the distortion is zero at both

it

=

it

and

it

=

it

it ;

and downward at intermediate

Thus, because of this nonmonotonic downward distortion, we can have it ;

t 1 t i i ;

for some

it

>

it .

in particular, when

it

t 1 t it ; i ; i

<

Indeed, it is to ensure that the solution to the Relaxed Program

is implementable that Eso and Szentes (2007) make their Assumption 1 that amounts to requiring 2( that Ji1

i1 ; i2 )

is nondecreasing in

i2 .

However, note that with a bounded type space

assumption can be satis…ed only when the information index is identically zero so that

i1

i2 ,

and

this i2

are independent. In the applications below we will consider AR(k) processes with unbounded type spaces in which case the information indices are constant— this helps ensuring strong monotonicity of the solution to the Relaxed Program.

5

Applications

We now show how the results in the previous sections can be put to work by examining a few applications where the agents’types evolve according to linear AR(k) processes. First, we consider a class of problems where the pro…t-maximizing mechanism takes the form of a quasi-e¢ cient, or handicapped, mechanism where distortions depend only on the agents’ …rst period types. Next, we consider environments where payo¤s separate over time as it is often assumed in applications. Lastly, we consider a setting where agents re…ne their valuations through consumption, as in the case of experience goods.

5.1

Handicapped mechanisms

Consider an environment where in each period the set of feasible allocations is Xt

RN +1 . The

utility to each agent i = 1; : : : ; N (gross of payments) is

ui ( ; x) =

T X

it xit

ci (x) ;

t=1

where ci : R(N +1)T ! R is an intertemporal cost function. The principal’s (gross) payo¤ function is u0 ( ; x) = v0 (x). Note that the cost functions ci and the principal’s payo¤ v0 need not be

time-separable; this permits us to accommodate dynamic aspects such as intertemporal capacity constraints, habit formation, and learning-by-doing.27 The private information of each agent i = 1; : : : ; N is assumed to evolve according to a linear AR(k) process, as in Example 4. The total t ( ; x) are thus independent of ( ; x) and coincide with the “impulse response information indices Ji1 27

What is important for the subsequent result is that (i) each agent’s payo¤ is a¢ ne in his own types and independent of the other agents’types and (ii) that the principal’s payo¤ is independent of .

46

t for the AR(k) process. We assume that the support of the …rst period innovation fucntions” Ji1

"i1 (and hence that of

i1 )

is bounded from below.

In this environment, the expected dynamic virtual surplus takes the form

E

"

v0

(~) +

" T N X X i=1

~it

1 ~ t ~t Ji1 i1 ( i1 ) it ( )

t

~)

it (

ci

t=1

(~)

##

:

Note that the latter coincides with the expected total surplus in a model where the (gross) payo¤ to each agent i is ui ( ; x) and where the (gross) payo¤ to the principal is v^0 ( ; x)

v0 (x)

N X T X

t Ji1

1 i1 ( i1 )xit :

i=1 t=1

This implies that the solution to the Relaxed Program can be obtained by solving an e¢ cient t program where the principal has an extra marginal cost Ji1

1 i1 ( i1 )

of allocating a unit to agent

i in period t. In general this e¢ cient program is a dynamic programming problem. However, in many applications, its solution can be readily found using existing methods. What is important to us is the observation that any solution to the Relaxed Program takes the form of an “Handicapped” e¢ cient mechanism: In period 1 each agent i sends a message mi1 determining his handicaps t Ji1

1 i1 (mi1 ).

The allocations that are implemented in each period are then the e¢ cient allocations

for an environment in which the principal’s payo¤ is v^0 ( ; x) and the agents’payo¤s are ui ( ; x). In particular, note that the distortions to each agent’s allocations are determined only by the t handicaps Ji1

1 i1 (mi1 ).

Proposition 11 In the environment with AR(k) types described above, any solution to the Relaxed Program is implementable in a mechanism that satis…es IC at any truthful histories in periods t

2.

Proof. Suppose types are reported truthfully in period one. Because the decisions that are implemented at any subsequent period t

2 in a handicapped mechanism corresponds to the e¢ cient

decisions for a private value environment in which the principal’s payo¤ is v^0 ( ; x), the agents’ payo¤s are ui ( ; x) and the …rst period decisions are x1 =

(m1 ): Hence, ncentive compatibility

from period t = 2 onward can be ensured using, e.g., “Team payments” (Athey and Segal, 2007) de…ned by i(

)=

X

uj ( ; ( ));

j6=i

for all i, and all : Incentives in the …rst period must be checked application-by-application.28 The environment 28

At period 1 the model where the principal has extra costs is one with interdependent values since these costs depends on the agents’true period-1 types through the hazard rates i1 ( i1 ).

47

considered in the next subsection provides an example where incentive compatibility obtains also at t = 1. In particular, incentive-compatibility in each period can easily be guaranted in the environment considered above if the costs ci

0 for all i (the environment then becomes a special

case of the class considered in the next subsection.)

5.2

Time-Separable Environments

We consider the problem of designing a pro…t maximizing sequence of auctions when buyers’types +1 RN +

follow AR(k) processes. Suppose that a monopolist seller has a set of feasible allocations Xt

available in each period t = 1; : : : ; T . There are N long-lived buyers. Each agent i (with the seller as agent 0) has a utility function of the form

ui ( ; x) =

T X

uit (

it ; xit );

t=1

where

it

evolves according to an AR(k) process as in Example 4, with the seller’s type

contractible. As in the previous subsection, the support of the …rst period type

i1

0t

being

is assumed

bounded from below. Proposition 12 Consider the auction environment with AR(k) values described above. Suppose each buyer i = 1; : : : ; N satis…es the assumptions of Proposition 2. Suppose further that for all buyers i, all periods t, (1) the periodic utility function uit has increasing di¤ erences in (

it ; xit ),

(2) the coe¢ cient

i1 ( i1 )

it

of the AR(k) process is nonnegative, (3) the …rst period hazard rate

is monotone, and (4) the partial derivative Then

@uit ( @

it ;xit ) it

is nonnegative and submodular in (

it ; xit ).

is the allocation rule in a pro…t-maximizing sequence of auctions if and only if for all t,

and [ ]-almost all

t(

Furthermore,

t

t

, (

) 2 arg max u0 ( xt

0t ; x0t )

+

N X

uit (

it ; xit )

i=1

t @uit ( it ; xit ) Ji1 ( ) @ it i1 i1

:

can be implemented in an OEP-IC mechanism using payments constructed as fol-

lows: Fix agent i. For all , let

i(

)=

i1 ( i1 ;

T

i)

+

T X

it ( 1 ; t );

t=2

where for all t

)

2, it ( 1 ; t )

uit (

it ;

it ( 1 ; t ))

Z 48

it

it

@uit (r;

it ( 1 ; (r;

@

it

i;t )))

dr;

and i1 ( i1 ;

T

i)

E

T[ i

Z

"

i1 ; i1 ;?)

]j(

i1

E

T[ i

T ui ((~i ; T i );

"

]jj(r;r;?)

i1

T (~i ; T i ))

T X

~

it ( 1 ; ( it ;

t=2

T X

Ji1

@ui (~i ;

~

T

i ((r; i; 1 );

@

=1

#

i ))

i

#

i;t ))

dr:

Proof. We show that under conditions (1)-(4) the profit-maximizing allocation rule can be found by solving the Relaxed Program. Note first that by (2) and (4) we can apply Proposition 9 to conclude that participation constraints for types other than the lowest one can be ignored. As the utility functions are time-separable, the relaxed problem can be solved by maximizing the virtual surplus "pointwise" for each period $t$ and type profile $\theta^t$. This implies that $\chi$ satisfies the condition in the statement of the Proposition. It remains to show that $\chi$ is implementable in an OEP-IC mechanism. This will in turn imply that any allocation rule maximizing the principal's profit must maximize the virtual surplus for all $t$ and $[\lambda]$-almost all $\theta^t$.

As a preliminary step, note that by inspection the period-$t$ allocation depends only on the current types $\theta_t$ and the first-period types $\theta_1$. By (1), (3) and (4), the period-$t$, state-$\theta^t$ virtual surplus has increasing differences in $(\theta_{i1}, x_{it})$ and in $(\theta_{it}, x_{it})$ (for any fixed values of the other arguments). Thus $\chi_{it}$ is increasing in $\theta_i^t$ (in the product order), implying that $\chi$ is strongly monotone.

Assume now that agents other than $i$ are truthful. Suppose further that at each period $t$, before sending his message $m_{it}$, agent $i$ has observed $(\theta_i^t, \theta_{-i}^{t-1}, m_i^{t-1}, x^{t-1})$. (We do not repeat the other agents' truthful messages.) For all $\theta_{-i}^T$ we will first construct payments of the form $\sum_{t=2}^{T} \psi_{it}(m_{i1}, m_{it}, \theta_{-i,1}, \theta_{-i,t})$ that implement $\chi_i(m_i^T, \theta_{-i}^T)$ in periods $t \ge 2$ at any period-$t$ history. So consider a period $t \ge 2$. Given the form of $\chi_i$ and $\psi_i$, agent $i$'s current message $m_{it}$ is relevant only for $\chi_{it}$ and $\psi_{it}$. Since type distributions are independent of decisions, $m_{it}$ does not have any indirect effects on the future types either. Furthermore, the only relevant part of the history $(\theta_i^t, \theta_{-i}^{t-1}, m_i^{t-1}, x^{t-1})$ is $(\theta_{it}, m_{i1}, \theta_{-i,1}, \theta_{-i,t})$, as $\chi_{it}$ and $\psi_{it}$ condition only on $(\theta_{it}, m_{i1}, \theta_{-i,1}, \theta_{-i,t})$, and $\chi_{it}$ determines agent $i$'s utility from allocation. Thus agent $i$'s period-$t$ problem is a static problem indexed by $(m_{i1}, \theta_{-i,1}, \theta_{-i,t})$. Now think of $\chi_{it}$ as a static allocation rule indexed by $k \equiv (m_{i1}, \theta_{-i,1}, \theta_{-i,t})$. By strong monotonicity this allocation rule is monotone in $m_{it}$. Thus, by (1), for each $k$ it can be implemented using payments
$$\psi_{it}(\theta_{it}, k) \equiv u_{it}\big(\theta_{it}, \chi_{it}(\theta_{it}, k)\big) - \int_{\underline{\theta}_{it}}^{\theta_{it}} \frac{\partial u_{it}\big(r, \chi_{it}(r, k)\big)}{\partial \theta_{it}}\, dr.$$
Repeating the steps for each period $t \ge 2$ and each agent $i$, it follows that for $t \ge 2$ the mechanism $\langle\chi, \psi\rangle$, where $\psi$ is as constructed above, is OEP-IC at any period-$t$ history.

Consider now period 1. By (1), $u_{it}$ has increasing differences in $(\theta_{it}, x_{it})$. By (2), the kernels $F_{it}(\cdot \mid \theta_i^{t-1}, x_i^{t-1})$ are ordered by $\theta_i^{t-1}$ in the sense of first-order stochastic dominance. Also, by assumption, utilities depend on $x$ only through $x_i^T$, and the kernels are independent of $x_i^T$. And above we showed that $\chi$ is strongly monotone and that the mechanism $\langle\chi, \psi\rangle$ is robustly IC at any period-2 history. Hence Corollary 1 implies that there exist payments $\hat\psi$ such that $\langle\chi, \hat\psi\rangle$ is OEP-IC at any (truthful) period-1 history. The construction of the payments is as in the proof of the corollary. $\blacksquare$

By inspection we see that in the profit-maximizing sequence of auctions, the period-$t$ allocation depends only on the buyers' current reports $\theta_t$ and their first-period reports $\theta_1$. In the special case of periodic utility functions of the linear form $u_{it}(\theta_{it}, x_{it}) = \theta_{it} x_{it}$, the information rents are independent of the current types, giving rise to a handicapped mechanism as defined in the previous subsection.

The payments listed in the Proposition implement the profit-maximizing allocation rule with OEP-ICs. That is, the implementation is robust to each agent $i$ observing all the types (past, present, and future) of the other agents.^{29} The payments exhibit a particular form of time-separability. For $t > 1$ the part $\psi_{it}$ of agent $i$'s payments, which is the part relevant for the period-$t$ allocation, depends only on the messages $(m_1, m_t)$ in the first and the current period. Thus these payments can be made, say, in period $t$. However, the part $\psi_{i1}$ of agent $i$'s payments conditions on the entire sequence $m_{-i}^T$ of the other agents' messages, so it has to be made at the very end. Note, however, that by taking expectations over the other agents' future types in $\psi_{i1}$ one obtains payments that implement the profit-maximizing allocation rule in a Weak Perfect Bayesian equilibrium, and which can be made already in period 1. Of course, the new payments are still OEP-IC from period 2 onwards.

In order to interpret the profit-maximizing sequence of auctions, note first that in the linear case (i.e., when $u_{it}(\theta_{it}, x_{it}) = \theta_{it} x_{it}$) the implementation is particularly simple. Although not essential for the argument, we suppose further that there is no allocation in the first period. Then taking expectations over the others' types in $\psi_{i1}$ as discussed above gives rise to the following handicapped mechanism: each agent $i$ chooses from a menu of first-period payments $\mathbb{E}^{\lambda[\chi]\mid\theta_{i1}}\big[\psi_{i1}(\theta_{i1}, \tilde\theta_{-i}^T)\big]$ indexed by $\theta_{i1}$. This determines his "handicaps" $J_{i1}^t(\theta_{i1})/\eta_{i1}(\theta_{i1})$ in periods $t \ge 2$. Then in each period $t \ge 2$, a "handicapped" VCG auction is played. (Eso and Szentes (2007) derive this result in the special case of a two-period model with allocation only in the second period.) This logic extends to nonlinear payoffs in the sense that in the first period the agents still choose from a menu of future plans (indexed by the first-period type). But now in the subsequent periods the information-rent term, and hence also the distortions, generally depend also on the current report through the partial derivative $\partial u_{it}(\theta_{it}, x_{it})/\partial \theta_{it}$. However, by inspection of the payments, it still remains true that intermediate reports (i.e., reports in periods $2, \dots, t-1$) are irrelevant for the period-$t$ allocation.

^{29} In fact, due to time-separability, in periods $t \ge 2$ the mechanism is truly ex post IC, in that it is robust also with respect to observing own future types.

5.3 Learning

The last application we consider pertains to the design of an optimal mechanism for a seller facing a buyer who learns his valuation over time through consumption. The problem arises, for example, in markets for experience goods (such as prescription drugs) and in expert services (such as a chiropractor's services). There is a seller who in each period $t = 1, \dots, T$ can produce a service at cost $c$. There is a single buyer whose valuation for the service is $v$. Payoffs are quasilinear and take the form $\sum_t (p_t - c) x_t$ for the seller and $\sum_t (x_t v - p_t)$ for the buyer, where $x_t \in \{0, 1\} = X_t$. Neither the buyer nor the seller knows $v$. The buyer's prior belief is that $v \sim N(\theta_1, 1/\nu)$, where $\theta_1$ is the mean and $\nu$ the precision (i.e., the inverse of the variance). The seller knows that the buyer's prior belief is Normal with precision $\nu$, but does not know the mean $\theta_1$ of the buyer's prior belief. The seller believes that $\theta_1$ is distributed on $[\underline\theta_1, +\infty)$ according to some absolutely continuous cdf $F_1$ with $F_1'(\theta_1) > 0$. We assume that the hazard rate $\eta_1$ of $F_1$ is nondecreasing. If the buyer consumes the service in period $t$ (i.e., if $x_t = 1$), he then receives a signal $s_t = v + \varepsilon_t$, where $\varepsilon_t \sim N(0, 1/\gamma)$ and $\gamma$ is the precision of the signal. The noises $\varepsilon_t$ are i.i.d. and independent of $v$. If the buyer does not consume in period $t$, he does not receive any new information about $v$.^{30}

Given the form of the buyer's payoff and the Gaussian structure of the underlying learning process, the relevant statistic for contracting in each period $t$ is the buyer's posterior expectation of $v$, which we denote by $\theta_t$. Using the properties of the Normal distribution, the evolution of $\theta_t$ can be expressed recursively as follows (see, e.g., DeGroot, 2004). For any $x^t$, let $|x^t| \equiv \sum_{\tau \le t} x_\tau$ denote the number of times the buyer consumed the service in periods $1, \dots, t$. The buyer's posterior belief about $v$ at the beginning of period $t = 1, \dots, T$, given $x^{t-1}$, is then Normal with mean $\theta_t$ and precision $\nu_t = \nu + \gamma|x^{t-1}|$. Depending on whether or not the buyer consumed the good in period $t - 1$, we then have two cases. If $x_{t-1} = 0$, then $\theta_t = \theta_{t-1}$ and $\nu_t = \nu_{t-1}$. If instead $x_{t-1} = 1$, then
$$\theta_t = \frac{\nu_{t-1}\,\theta_{t-1} + \gamma\, s_{t-1}}{\nu_{t-1} + \gamma} \quad\text{and}\quad \nu_t = \nu_{t-1} + \gamma,$$
so that $\theta_t$ is a weighted average of $\theta_{t-1}$ and the period-$(t-1)$ signal $s_{t-1}$. Thus, before the signal $s_{t-1}$ is realized, we have that
$$\theta_t \mid \big(\theta^{t-1}, x^{t-2}, x_{t-1} = 1\big) \sim N\!\left( \theta_{t-1},\ \frac{\gamma}{\big(\nu + \gamma|x^{t-2}|\big)\big(\nu + \gamma|x^{t-1}|\big)} \right),$$
and $\theta_t \mid (\theta^{t-1}, x^{t-2}, x_{t-1} = 0) = \theta_{t-1}$. These expressions define Markov kernels $F_t(\cdot \mid \theta_{t-1}, x^{t-1})$, where the sequence of past allocations determines the precision.

^{30} See also Nazerzadeh, Saberi, and Vohra (2008) for a similar environment.
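As a consistency check on the recursion above, the update can be written directly in code. The sketch below is not from the paper; `nu` is the prior precision, `gamma` the signal precision, and the posterior mean after consumption is the precision-weighted average of the current mean and the new signal.

```python
def update(theta, k, s, nu, gamma):
    """One step of the buyer's learning: current posterior mean `theta` after k past
    consumptions (so current precision nu + k*gamma), new signal s = v + eps.
    Returns (new posterior mean, new consumption count)."""
    prec = nu + k * gamma
    theta_next = (prec * theta + gamma * s) / (prec + gamma)
    return theta_next, k + 1

def innovation_variance(t, nu, gamma):
    """Variance of theta_{t+1} - theta_t before the signal realizes, when the buyer
    has consumed in every period so far (t-1 past signals entering theta_t):
    gamma / ((nu + t*gamma) * (nu + (t-1)*gamma))."""
    return gamma / ((nu + t * gamma) * (nu + (t - 1) * gamma))
```

The martingale property of the posterior mean (a signal equal to the current mean leaves it unchanged) and the shrinking innovation variance are exactly the features the stopping analysis below exploits.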

We first show that in terms of payoffs it is without loss to restrict attention to a subclass of allocation rules.

Definition 12 An allocation rule $\chi$ is a stopping rule if, for all $t$, all $s > t$ and all $\theta \in \Theta$, $\chi_t(\theta^t) = 0$ implies $\chi_s(\theta^s) = 0$. The set of stopping rules is denoted $X^S$.

Lemma 5 Consider the learning environment described above. If $\langle\chi, \psi\rangle$ is an ex-ante IC mechanism, then there exists an ex-ante IC mechanism $\langle\hat\chi, \hat\psi\rangle$ such that $\hat\chi$ is a stopping rule and the expected payoffs of both the buyer and the seller under $\langle\hat\chi, \hat\psi\rangle$ are the same as under $\langle\chi, \psi\rangle$.

The lemma is similar to the well-known result that in a two-armed bandit problem with one safe arm the optimal strategy is a stopping rule. Given this result, in what follows we restrict attention to stopping rules. Then the only relevant period-$t$ histories are the ones in which the agent has consumed in all the preceding periods. Thus we can replace $|x^t|$ in all the formulas above by $t$. In particular, before stopping, we have that
$$\theta_{t+1} \mid \theta_t \sim N\!\left( \theta_t,\ \frac{\gamma}{(\nu + t\gamma)\big(\nu + (t-1)\gamma\big)} \right).$$
Denoting the standard deviation of the period-$(t+1)$ posterior mean by $\sigma_{t+1} \equiv \big\{ \gamma / \big[ (\nu + t\gamma)(\nu + (t-1)\gamma) \big] \big\}^{1/2}$, we can then express the kernels as $F_{t+1}(\theta_{t+1} \mid \theta_t, x^t) = \Phi\big( (\theta_{t+1} - \theta_t)/\sigma_{t+1} \big)$, where $\Phi$ is the cdf of the standard normal distribution. Thus, before stopping, the model satisfies the assumptions of Proposition 2, and the direct information index between any two adjacent periods is simply
$$I_t^{t+1}(\theta^{t+1}) = -\frac{\partial F_{t+1}(\theta_{t+1} \mid \theta_t, x^t)/\partial\theta_t}{f_{t+1}(\theta_{t+1} \mid \theta_t, x^t)} = \frac{\phi\big( (\theta_{t+1} - \theta_t)/\sigma_{t+1} \big)/\sigma_{t+1}}{\phi\big( (\theta_{t+1} - \theta_t)/\sigma_{t+1} \big)/\sigma_{t+1}} = 1,$$
where $\phi$ is the density of the standard normal distribution. Since the model is Markovian, $I_t^{\tau} = 0$ for all $\tau > t + 1$. Hence, before stopping, we have $J_t^{\tau} = 1$ for all $\tau$ and $t$. The Relaxed Program then takes the form
$$\max_{\chi \in X^S}\ \mathbb{E}^{\lambda[\chi]}\!\left[ \sum_{t=1}^{T} \chi_t(\tilde\theta^t)\!\left( \tilde\theta_t - c - \frac{1}{\eta_1(\tilde\theta_1)} \right) \right],$$
where the maximization is over the set of stopping rules $X^S$.
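To illustrate, the Relaxed Program can be solved numerically by backward induction on the posterior mean, as the proof of Proposition 13 below does analytically. The sketch is not from the paper: for a fixed first-period type the virtual flow value is $\theta_t - c - 1/\eta_1(\theta_1)$, and since the hazard-rate term enters only as a constant markup on the cost, the routine takes it as a single parameter `handicap` (setting it to zero yields the efficient cutoffs $z_t^*$). The innovation distribution is the mean-zero Normal derived above, discretized crudely.

```python
import math

def _normal_weights(sigma, halfwidth=4.0, n=41):
    # Discretize N(0, sigma^2) on [-4 sigma, 4 sigma]; pdf weights, renormalized.
    step = 2.0 * halfwidth * sigma / (n - 1)
    pts = [-halfwidth * sigma + i * step for i in range(n)]
    w = [math.exp(-0.5 * (x / sigma) ** 2) for x in pts]
    total = sum(w)
    return [(x, wi / total) for x, wi in zip(pts, w)]

def cutoffs(T, c, handicap, nu, gamma, lo=-4.0, hi=6.0, n=1001):
    """Backward induction for v_t(th) = max(0, th - c - handicap + E v_{t+1}(th + D_t)),
    D_t ~ N(0, gamma/((nu + t*gamma)*(nu + (t-1)*gamma))). Returns [z_1, ..., z_T]."""
    step = (hi - lo) / (n - 1)
    grid = [lo + i * step for i in range(n)]

    def interp(v, x):  # linear interpolation, flat extrapolation at the edges
        if x <= grid[0]:
            return v[0]
        if x >= grid[-1]:
            return v[-1]
        j = int((x - grid[0]) / step)
        a = (x - grid[j]) / step
        return (1 - a) * v[j] + a * v[j + 1]

    z = [None] * (T + 1)
    v_next = [0.0] * n  # v_{T+1} identically zero
    for t in range(T, 0, -1):
        sigma = math.sqrt(gamma / ((nu + t * gamma) * (nu + (t - 1) * gamma)))
        wts = _normal_weights(sigma)
        v_now, z_t = [], None
        for th in grid:
            cont = sum(w * interp(v_next, th + d) for d, w in wts)
            gain = th - c - handicap + cont
            v_now.append(max(0.0, gain))
            if gain >= 0.0 and z_t is None:
                z_t = th  # lowest grid type at which selling continues
        z[t] = z_t
        v_next = v_now
    return z[1:]
```

On test parameters the computed cutoffs are nondecreasing in $t$, the final cutoff is (up to grid error) the static threshold $c + \text{handicap}$, and the efficient cutoffs lie below the handicapped ones, in line with parts (1)-(3) of Proposition 13 below.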

The dynamic virtual surplus for this model is thus analogous to the one for the separable environment of the previous subsection. However, unlike in that model, we cannot solve the Relaxed Program here by pointwise maximization, because it is a stopping problem. Instead, we proceed by backward induction. While it is difficult to get a closed-form solution for the optimal allocation rule, it is possible to characterize it partially and to get a clean comparison to the efficient allocation rule.

Definition 13 A stopping rule $\chi \in X^S$ is a cutoff rule if for all $t$ and all $\theta^{t-1}$, $\chi_t(\theta^{t-1}, \theta_t)$ is nondecreasing in $\theta_t$. The cutoffs are given by
$$z_t(\theta^{t-1}) \equiv \inf\big\{ \theta_t \in \Theta_t : \chi_t(\theta^{t-1}, \theta_t) = 1 \big\}.$$

Proposition 13 Consider the learning environment described above. The following are true:

(1) The efficient allocation rule $\chi^*$ is a cutoff rule where, for all $t$ and all $\theta^{t-1}$, the cutoff $z_t^*(\theta^{t-1}) \equiv z_t^*$ is independent of $\theta^{t-1}$ and nondecreasing in $t$.

(2) The solution $\chi$ to the Relaxed Program is a cutoff rule where, for all $t$ and all $\theta^{t-1}$, the cutoff $z_t(\theta^{t-1}) \equiv z_t(\theta_1)$ is independent of $(\theta_2, \dots, \theta_{t-1})$, nondecreasing in $t$, and nonincreasing in $\theta_1$.

(3) For all $t$ and all $\theta_1$, $z_t(\theta_1) \ge z_t^*$. In particular, together with (4) this implies that a profit-maximizing monopoly experiments less than what is socially desirable.

(4) Both the solution $\chi$ to the Relaxed Program and the efficient rule $\chi^*$ are implementable.

Proof. Part (1). Consider the efficient allocation rule $\chi^*$. It solves a stopping problem where the period-$t$ payoff is $x_t(\theta_t - c)$, with $\theta_t$ distributed as above. Let $v_t(\theta_t)$ denote the continuation value from period $t$ onwards, which depends only on the current type given the Markov structure. We have
$$v_t(\theta_t) = \max\Big\{ 0,\ \theta_t - c + \mathbb{E}\big[ v_{t+1}(\tilde\theta_{t+1}) \mid \theta_t \big] \Big\}. \tag{23}$$
(We are using the conditional-expectation notation for convenience; the expectation is actually taken with respect to the kernel identified above.) We proceed by backward induction. At $T$, for any $\theta$, the efficient allocation $\chi_T^*(\theta)$ solves $v_T(\theta_T) = \max\{0, \theta_T - c\}$. Thus $\chi_T^*$ has cutoff $z_T^* = c$, which is independent of $\theta^{T-1}$; by implication, $v_T$ is nondecreasing. Suppose then that the properties identified for period $T$ are true for some period $t + 1$ (that is, $\chi_{t+1}^*$ has cutoff $z_{t+1}^*$ independent of $\theta^t$, and $v_{t+1}$ is nondecreasing). We want to show that the same properties hold in period $t$. Given $\theta_t$, $\chi_t^*(\theta^t)$ solves the maximization problem in (23). Since $v_{t+1}$ is nondecreasing by the induction hypothesis and we have FOSD, $\chi_t^*$ has a cutoff $z_t^*$ which does not depend on $\theta^{t-1}$. Furthermore, $v_t$ is nondecreasing. We conclude that the efficient rule is a cutoff rule where the cutoffs depend only on $t$.

It remains to show that the cutoffs $z_t^*$ are nondecreasing in $t$. By inspection of (23), it suffices to show that $v_t$ is nonincreasing in $t$. To this end, we first establish by backward induction that the functions $v_t$ are convex. This is clearly true of $v_T$. Suppose then that $v_{t+1}$ is convex. Note that the kernels $F_t$ identified above imply that the distribution of $\tilde\Delta_t \equiv \tilde\theta_{t+1} - \theta_t$ is independent of $\theta_t$, with $\tilde\Delta_t \sim N(0, \sigma_{t+1}^2)$. Thus for any $\theta_t$, $\theta'_t$ and $\alpha \in [0, 1]$ we have
$$v_t\big(\alpha\theta_t + (1-\alpha)\theta'_t\big) = \max\Big\{ 0,\ \alpha\theta_t + (1-\alpha)\theta'_t - c + \mathbb{E}\big[ v_{t+1}\big( \alpha(\theta_t + \tilde\Delta_t) + (1-\alpha)(\theta'_t + \tilde\Delta_t) \big) \big] \Big\}$$
$$\le \max\Big\{ 0,\ \alpha\big( \theta_t - c + \mathbb{E}\big[ v_{t+1}(\theta_t + \tilde\Delta_t) \big] \big) + (1-\alpha)\big( \theta'_t - c + \mathbb{E}\big[ v_{t+1}(\theta'_t + \tilde\Delta_t) \big] \big) \Big\}$$
$$\le \alpha \max\Big\{ 0,\ \theta_t - c + \mathbb{E}\big[ v_{t+1}(\theta_t + \tilde\Delta_t) \big] \Big\} + (1-\alpha)\max\Big\{ 0,\ \theta'_t - c + \mathbb{E}\big[ v_{t+1}(\theta'_t + \tilde\Delta_t) \big] \Big\} = \alpha v_t(\theta_t) + (1-\alpha)v_t(\theta'_t),$$
where the first inequality uses the convexity of $v_{t+1}$ and the second the convexity of $\max\{0, \cdot\}$. Thus $v_t$ is convex.

Suppose then that for some $t$, $v_\tau \ge v_{\tau+1}$ for all $\tau \ge t$; this holds vacuously for $t = T$ (with $v_{T+1} \equiv 0$). Consider period $t - 1$. For any $a \in \mathbb{R}$,
$$v_{t-1}(a) = \max\Big\{ 0,\ a - c + \mathbb{E}\big[ v_t(a + \tilde\Delta_{t-1}) \big] \Big\} \ge \max\Big\{ 0,\ a - c + \mathbb{E}\big[ v_{t+1}(a + \tilde\Delta_{t-1}) \big] \Big\} \ge \max\Big\{ 0,\ a - c + \mathbb{E}\big[ v_{t+1}(a + \tilde\Delta_t) \big] \Big\} = v_t(a),$$
where the first inequality follows by the induction hypothesis and the second by the convexity of $v_{t+1}$, since the distribution of $\tilde\Delta_t$ second-order stochastically dominates that of $\tilde\Delta_{t-1}$ (the former has the smaller variance).

Part (2). Consider the Relaxed Program. Let $v_t(\theta_1, \theta_t)$ denote the continuation value from period $t$ onwards. We have
$$v_t(\theta_1, \theta_t) = \max\left\{ 0,\ \theta_t - c - \frac{1}{\eta_1(\theta_1)} + \mathbb{E}\big[ v_{t+1}(\theta_1, \tilde\theta_{t+1}) \mid \theta_t \big] \right\}. \tag{24}$$
By backward induction one sees that $v_t$ depends only on $(\theta_1, \theta_t)$. Thus the solution to the Relaxed Program is an efficient allocation rule in the model parameterized by $\theta_1$, where the seller's cost is $c + 1/\eta_1(\theta_1)$. The result in part (1) then implies that $\chi$ is a cutoff rule, where the cutoffs $z_t(\theta_1)$ depend only on $t$ and the parameter $\theta_1$, and are nondecreasing in $t$. Since the hazard rate $\eta_1$ is assumed to be monotone, the parameterized cost $c + 1/\eta_1(\theta_1)$ is nonincreasing in $\theta_1$. This implies that $z_t(\theta_1)$ is nonincreasing in $\theta_1$.

Part (3). We prove the result by verifying the conditions of Proposition 8. Super- and submodularity (respectively of $u_i(\theta, x)$ and of $\partial u_i(\theta, x)/\partial\theta_{it}$) are satisfied since the payoffs are time-separable and the flow payoffs are linear. By inspection, so is SCP. We also have FOSD, since $\theta_t$ follows a nonstationary random walk. DNOT obtains since, given the restriction to stopping rules, at any nontrivial history (i.e., one at which selling has not yet stopped) the distributions depend only on $t$. Finally, the set of stopping rules is seen to be a lattice as follows: define the pointwise order on $X^S$ by setting $\chi \ge \chi'$ if $\chi_t(\theta^t) \ge \chi'_t(\theta^t)$ for all $t$ and all $\theta^t$. It is then straightforward to verify that the meet and the join of any two stopping rules are stopping rules. The result then follows from Proposition 8.

Part (4). Implementability of each of the two rules follows from Proposition 9 and Corollary 1, since both rules are clearly strongly monotone. The other assumptions are verified as in the proof of part (3). $\blacksquare$

That the profit-maximizing cutoffs are increasing in $t$ is due to the fact that the option value of learning is decreasing in the number of times the service has been provided. First, the impact of each new signal on the buyer's posterior belief declines with the number of signals received in the past. Second, as the remaining horizon gets shorter, the seller will reap the benefits from high valuations in fewer periods.

Perhaps more interestingly, the cutoffs in the profit-maximizing allocation rule depend on the buyer's first-period type. This implies that the optimal selling mechanism cannot be implemented with a sequence of prices. In fact, even history-dependent prices fail to implement the optimal mechanism: what is essential is to condition the prices not on the purchase history $x^{t-1}$ but on the first-period type $\theta_1$. This can be done by offering the buyer a menu of contracts, where each contract corresponds to a different price path. Because the optimal cutoffs are increasing in time, so are the prices in each path. To build demand, the monopolist thus optimally offers "introductory rates," or "discounts," that expire after the service has been provided for a few periods.

References

Angus, J. E. (1994): "The Probability Integral Transform and Related Results," SIAM Review, 36(4), 652-654.

Athey, S., and I. Segal (2007): "An Efficient Dynamic Mechanism," Mimeo, Stanford University.

Baron, D. P., and D. Besanko (1984): "Regulation and Information in a Continuing Relationship," Information Economics and Policy, 1(3), 267-302.

Battaglini, M. (2005): "Long-Term Contracting with Markovian Consumers," American Economic Review, 95(3), 637-658.

Courty, P., and H. Li (2000): "Sequential Screening," Review of Economic Studies, 67(4), 697-717.

Cremer, J., and R. P. McLean (1988): "Full Extraction of the Surplus in Bayesian and Dominant Strategy Auctions," Econometrica, 56(6), 1247-1257.

DeGroot, M. H. (2004): Optimal Statistical Decisions. Wiley-IEEE.

Doepke, M., and R. M. Townsend (2006): "Dynamic Mechanism Design with Hidden Income and Hidden Actions," Journal of Economic Theory, 126(1), 235-285.

Dudley, R. M. (2002): Real Analysis and Probability. Cambridge University Press.

Ely, J. (2001): "Revenue Equivalence Without Differentiability Assumptions," Mimeo, Northwestern University.

Eso, P., and B. Szentes (2007): "Optimal Information Disclosure in Auctions and the Handicap Auction," Review of Economic Studies, 74(3), 705-731.

Garcia, D. (2005): "Monotonicity in Direct Revelation Mechanisms," Economics Letters, 88(1), 21-26.

Gershkov, A., and B. Moldovanu (2007): "The Dynamic Assignment of Heterogenous Objects: A Mechanism Design Approach," Discussion Paper, University of Bonn.

Matthews, S., and J. Moore (1987): "Monopoly Provision of Quality and Warranties: An Exploration in the Theory of Multidimensional Screening," Econometrica, 55(2), 441-467.

Mezzetti, C. (2004): "Mechanism Design with Interdependent Valuations: Efficiency," Econometrica, 72(5), 1617-1626.

Milgrom, P., and I. Segal (2002): "Envelope Theorems for Arbitrary Choice Sets," Econometrica, 70(2), 583-601.

Mirrlees, J. A. (1971): "An Exploration in the Theory of Optimum Income Taxation," Review of Economic Studies, 38(114), 175-208.

Myerson, R. B. (1986): "Multistage Games with Communication," Econometrica, 54(2), 323-358.

Pancs, R. (2007): "Optimal Information Disclosure in Auctions and Negotiations: A Mechanism Design Approach," Mimeo, Stanford University.

Rochet, J.-C., and L. Stole (2003): "The Economics of Multidimensional Screening," in Advances in Economics and Econometrics: Theory and Applications - Eighth World Congress, ed. by M. Dewatripont, L. P. Hansen, and S. J. Turnovsky, vol. 1 of Econometric Society Monographs, pp. 150-197. Cambridge University Press.

Rudin, W. (1976): Principles of Mathematical Analysis. McGraw-Hill, 3rd edn.

Stokey, N. L., and R. E. Lucas, Jr. (1989): Recursive Methods in Economic Dynamics. Harvard University Press.

Strausz, R. (2006): "Deterministic versus Stochastic Mechanisms in Principal-Agent Models," Journal of Economic Theory, 127(1), 306-314.

Topkis, D. M. (1998): Supermodularity and Complementarity. Princeton University Press.

Appendices

A Statement and proof of Lemma A.1

Lemma A.1. Assume the environment satisfies Assumption 2. Then Assumption 5 implies that for any $t$ and any $\tau < t$:
$$\exists B < +\infty :\quad \left| \frac{\partial}{\partial\theta_\tau} \int \theta_t\, dF_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \right| \le B \quad \text{for all } (\theta^{t-1}, y^{t-1}).$$

Proof of Lemma A.1. Assumption 5 implies that
$$\frac{\partial}{\partial\theta_\tau} \int \theta_t\, dF_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) = \lim_{\theta'_\tau \to \theta_\tau} \frac{\int \theta_t\, d\big[ F_t\big(\theta_t \mid (\theta^{t-1}_{-\tau}, \theta'_\tau), y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \big]}{\theta'_\tau - \theta_\tau}$$
$$= \lim_{\theta'_\tau \to \theta_\tau} \int \frac{F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid (\theta^{t-1}_{-\tau}, \theta'_\tau), y^{t-1}\big)}{\theta'_\tau - \theta_\tau}\, d\theta_t = -\int \frac{\partial F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big)}{\partial\theta_\tau}\, d\theta_t.$$
The second equality follows by Lemma 6 below (applied with $G(\theta_t) = \theta_t$). The last equality follows by the dominated convergence theorem, since the integrand is bounded for all $\theta_t$ by the integrable function $B_t(\theta_t)$. Furthermore,
$$\left| \int \frac{\partial F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big)}{\partial\theta_\tau}\, d\theta_t \right| \le \int B_t(\theta_t)\, d\theta_t,$$
from which the claim follows by taking $B \equiv \int B_t(\theta_t)\, d\theta_t$. $\blacksquare$

B Proof of Proposition 1

Two kinds of period-$t$ histories appear frequently in the proof: those including the message $m_t$ but excluding the realization of $y_t$, and those including the current type $\theta_t$ but excluding the message $m_t$. For expositional clarity we introduce notation to distinguish the value functions associated with these two types of histories. For the first kind, we let $\bar V_t(\theta^t, m^t, y^{t-1})$ denote the supremum continuation expected utility. For the second kind, we continue to use the value function $V$, but in order to clarify notation further we drop the superscript and add a time subscript. Thus we write $V_t(\theta^t, m^{t-1}, y^{t-1}) \equiv V(\theta^t, m^{t-1}, y^{t-1})$. Also, it is convenient to introduce period $T + 1$ as a notational device and then let $\bar V_{T+1}(\theta, m, y) = V_{T+1}(\theta, m, y) \equiv U(\theta, y)$.

Note that by definition,
$$\bar V_t\big(\theta^t, m^t, y^{t-1}\big) = \int\!\!\int V_{t+1}\big(\theta^{t+1}, m^t, y^t\big)\, dF_{t+1}\big(\theta_{t+1} \mid \theta^t, y^t\big)\, d\pi_t\big(y_t \mid m^t, y^{t-1}\big), \tag{25}$$
$$V_{t+1}\big(\theta^{t+1}, m^t, y^t\big) = \sup_{m_{t+1}} \bar V_{t+1}\big(\theta^{t+1}, (m^t, m_{t+1}), y^t\big),$$
where $\pi_t(\cdot \mid m^t, y^{t-1})$ denotes the distribution over period-$t$ decisions induced by the mechanism.

The proof proceeds in a series of lemmas.

Lemma 6 For any Lipschitz function $G : \Theta_t \to \mathbb{R}$,
$$\int G(\theta_t)\, dF_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - \int G(\theta_t)\, dF_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) = -\int G'(\theta_t)\big[ F_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \big]\, d\theta_t,$$
where all the integrals exist.

Proof. First note that the first two integrals exist: letting $M$ be the Lipschitz constant of $G$ and picking any $\hat\theta_t \in \Theta_t$, we can write $|G(\theta_t)| \le |G(\hat\theta_t)| + M|\hat\theta_t| + M|\theta_t|$, and all terms have finite expectations with respect to the probability distribution $dF_t(\theta_t \mid \cdot)$, the last term by Assumption 2. Thus, we can use integration by parts to write
$$\int_{\underline\theta_t}^{\bar\theta_t} G(\theta_t)\, d\big[ F_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \big] = \Big[ G(\theta_t)\big( F_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \big) \Big]_{\theta_t = \underline\theta_t}^{\theta_t = \bar\theta_t} - \int_{\underline\theta_t}^{\bar\theta_t} G'(\theta_t)\big[ F_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \big]\, d\theta_t.$$
When both $\underline\theta_t$ and $\bar\theta_t$ are finite, the boundary term vanishes (both cdf's equal $0$ at $\underline\theta_t$ and $1$ at $\bar\theta_t$), and the lemma follows. If $\bar\theta_t = +\infty$, then as $\theta_t \to +\infty$,
$$\Big| G(\theta_t)\big( F_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \big) \Big| \le \big( |G(\hat\theta_t)| + M|\hat\theta_t| \big)\Big| F_t\big(\theta_t \mid \hat\theta^{t-1}, y^{t-1}\big) - F_t\big(\theta_t \mid \theta^{t-1}, y^{t-1}\big) \Big| + M \int_{z \ge \theta_t} |z|\, d\big[ F_t\big(z \mid \hat\theta^{t-1}, y^{t-1}\big) + F_t\big(z \mid \theta^{t-1}, y^{t-1}\big) \big] \to 0$$
by Assumptions 2 and 5. The case $\underline\theta_t = -\infty$ is symmetric. $\blacksquare$

For any function $G : \Theta \to \mathbb{R}$, let
$$\frac{\partial^-}{\partial\theta_t} G(\theta) = \limsup_{\theta'_t \uparrow \theta_t} \frac{G(\theta_{-t}, \theta'_t) - G(\theta)}{\theta'_t - \theta_t} \quad\text{and}\quad \frac{\partial^+}{\partial\theta_t} G(\theta) = \liminf_{\theta'_t \downarrow \theta_t} \frac{G(\theta_{-t}, \theta'_t) - G(\theta)}{\theta'_t - \theta_t}.$$

The following lemma is similar to Theorem 1 of Milgrom and Segal (2002) and Theorem 1 of Ely (2001).

Lemma 7 In an ex ante IC mechanism, for any integers $1 \le t \le \tau \le T$ and $[\mu]$-almost all truthful histories $(\theta^\tau, \theta^{\tau-1}, y^{\tau-1})$,
$$\frac{\partial^-}{\partial\theta_t} V\big(\theta^\tau, \theta^{\tau-1}, y^{\tau-1}\big) \le \frac{\partial^-}{\partial\theta_t} \bar V\big(\theta^\tau, \theta^\tau, y^{\tau-1}\big) \quad\text{and}\quad \frac{\partial^+}{\partial\theta_t} \bar V\big(\theta^\tau, \theta^\tau, y^{\tau-1}\big) \le \frac{\partial^+}{\partial\theta_t} V\big(\theta^\tau, \theta^{\tau-1}, y^{\tau-1}\big).$$

Proof. By definition of $V$ and $\bar V$, and by ex ante IC, we have for $[\mu]$-almost all histories and all $\theta'_t$:
$$V\big((\theta^\tau_{-t}, \theta'_t), \theta^{\tau-1}, y^{\tau-1}\big) \ge \bar V\big((\theta^\tau_{-t}, \theta'_t), \theta^\tau, y^{\tau-1}\big), \quad\text{with equality at } \theta'_t = \theta_t.$$
Taking $\theta'_t > \theta_t$, dividing by $\theta'_t - \theta_t$, and then taking the liminf as $\theta'_t \downarrow \theta_t$ yields the second inequality in the lemma. Taking $\theta'_t < \theta_t$, dividing by $\theta'_t - \theta_t$, and then taking the limsup as $\theta'_t \uparrow \theta_t$ yields the first inequality in the lemma. $\blacksquare$

The next two lemmas do not rely on IC.

Lemma 8 There exists $M < \infty$ such that, for all $1 \le t \le \tau \le T + 1$ and all histories, the functions $\bar V_\tau(\theta^\tau, m^\tau, y^{\tau-1})$ and $V_\tau(\theta^\tau, m^{\tau-1}, y^{\tau-1})$ are Lipschitz continuous in $\theta_t$ with constant $M$, i.e.,
$$\big| \bar V_\tau\big((\theta^\tau_{-t}, \hat\theta_t), m^\tau, y^{\tau-1}\big) - \bar V_\tau\big(\theta^\tau, m^\tau, y^{\tau-1}\big) \big| \le M|\hat\theta_t - \theta_t| \quad\text{and}\quad \big| V_\tau\big((\theta^\tau_{-t}, \hat\theta_t), m^{\tau-1}, y^{\tau-1}\big) - V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big) \big| \le M|\hat\theta_t - \theta_t|.$$

Proof. By backward induction on $\tau$, for fixed $t$. $\bar V_{T+1}(\theta^{T+1}, m^{T+1}, y^T) = U(\theta^T, y^T)$ is equi-Lipschitz continuous by Assumption 4. Now suppose $V_{\tau+1}(\theta^{\tau+1}, m^\tau, y^\tau)$ is equi-Lipschitz continuous in $\theta_t$ and in $\theta_{\tau+1}$ with constant $M_{\tau+1}$. Then, using (25) and splitting the difference into the direct effect of $\theta_t$ on $V_{\tau+1}$ and its effect through the kernel $F_{\tau+1}$,
$$\big| \bar V_\tau\big((\theta^\tau_{-t}, \hat\theta_t), m^\tau, y^{\tau-1}\big) - \bar V_\tau\big(\theta^\tau, m^\tau, y^{\tau-1}\big) \big| \le \sup_{y^\tau} \left| \int \Big[ V_{\tau+1}\big((\theta^{\tau+1}_{-t}, \hat\theta_t), m^\tau, y^\tau\big) - V_{\tau+1}\big(\theta^{\tau+1}, m^\tau, y^\tau\big) \Big]\, dF_{\tau+1}\big(\theta_{\tau+1} \mid (\theta^\tau_{-t}, \hat\theta_t), y^\tau\big) \right|$$
$$+ \sup_{y^\tau} \left| \int \frac{\partial V_{\tau+1}\big(\theta^{\tau+1}, m^\tau, y^\tau\big)}{\partial\theta_{\tau+1}}\, \Big[ F_{\tau+1}\big(\theta_{\tau+1} \mid (\theta^\tau_{-t}, \hat\theta_t), y^\tau\big) - F_{\tau+1}\big(\theta_{\tau+1} \mid \theta^\tau, y^\tau\big) \Big]\, d\theta_{\tau+1} \right|$$
$$\le M_{\tau+1}|\hat\theta_t - \theta_t| + M_{\tau+1}|\hat\theta_t - \theta_t| \int B_{\tau+1}(\theta_{\tau+1})\, d\theta_{\tau+1} = M_{\tau+1}\left( 1 + \int B_{\tau+1}(\theta_{\tau+1})\, d\theta_{\tau+1} \right)|\hat\theta_t - \theta_t|,$$
where we used Lemma 6 (whose hypotheses hold since $V_{\tau+1}$ is Lipschitz in $\theta_{\tau+1}$, so that $|\partial V_{\tau+1}/\partial\theta_{\tau+1}| \le M_{\tau+1}$ almost everywhere) and Assumption 5. This shows that $\bar V_\tau$ is equi-Lipschitz continuous in $\theta_t$; and since $V_\tau(\theta^\tau, m^{\tau-1}, y^{\tau-1}) = \sup_{m_\tau} \bar V_\tau(\theta^\tau, (m^{\tau-1}, m_\tau), y^{\tau-1})$ and the bound is uniform in $m_\tau$, so is $V_\tau$. As there are finitely many periods, a uniform constant $M$ exists. $\blacksquare$

Lemma 9 For any integers $\tau, t$ such that $1 \le t < \tau \le T$, and any $(\theta^{\tau-1}, m^{\tau-1}, y^{\tau-2})$:
$$\frac{\partial^-}{\partial\theta_t} \bar V_{\tau-1}\big(\theta^{\tau-1}, m^{\tau-1}, y^{\tau-2}\big) \le \int\!\!\int \frac{\partial^- V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_t}\, dF_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)\, d\pi_{\tau-1}\big(y_{\tau-1} \mid m^{\tau-1}, y^{\tau-2}\big)$$
$$- \int\!\!\int \frac{\partial V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_\tau}\, \frac{\partial F_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_t}\, d\theta_\tau\, d\pi_{\tau-1}\big(y_{\tau-1} \mid m^{\tau-1}, y^{\tau-2}\big), \tag{26}$$
$$\frac{\partial^+}{\partial\theta_t} \bar V_{\tau-1}\big(\theta^{\tau-1}, m^{\tau-1}, y^{\tau-2}\big) \ge \int\!\!\int \frac{\partial^+ V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_t}\, dF_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)\, d\pi_{\tau-1} - \int\!\!\int \frac{\partial V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_\tau}\, \frac{\partial F_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_t}\, d\theta_\tau\, d\pi_{\tau-1}. \tag{27}$$

Proof. Using (25), write for any $\theta'_t \ne \theta_t$:
$$\frac{\bar V_{\tau-1}\big((\theta^{\tau-1}_{-t}, \theta'_t), m^{\tau-1}, y^{\tau-2}\big) - \bar V_{\tau-1}\big(\theta^{\tau-1}, m^{\tau-1}, y^{\tau-2}\big)}{\theta'_t - \theta_t}$$
$$= \int\!\!\int \frac{V_\tau\big((\theta^{\tau-1}_{-t}, \theta'_t, \theta_\tau), m^{\tau-1}, y^{\tau-1}\big) - V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\theta'_t - \theta_t}\, dF_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)\, d\pi_{\tau-1} \tag{28}$$
$$+ \int\!\!\int V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)\, \frac{d\big[ F_\tau\big(\theta_\tau \mid (\theta^{\tau-1}_{-t}, \theta'_t), y^{\tau-1}\big) - F_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big) \big]}{\theta'_t - \theta_t}\, d\pi_{\tau-1} \tag{29}$$
$$+ \int\!\!\int \frac{V_\tau\big((\theta^{\tau-1}_{-t}, \theta'_t, \theta_\tau), m^{\tau-1}, y^{\tau-1}\big) - V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\theta'_t - \theta_t}\, d\big[ F_\tau\big(\theta_\tau \mid (\theta^{\tau-1}_{-t}, \theta'_t), y^{\tau-1}\big) - F_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big) \big]\, d\pi_{\tau-1}. \tag{30}$$
We examine separately the behavior of each of the three integrals as $\theta'_t \to \theta_t$.

(30): Note that for any $y^{\tau-1}$ the integrand is bounded by Lemma 8, while the total variation of the measure $d\big[ F_\tau(\cdot \mid (\theta^{\tau-1}_{-t}, \theta'_t), y^{\tau-1}) - F_\tau(\cdot \mid \theta^{\tau-1}, y^{\tau-1}) \big]$ converges to zero by Assumption 6. Thus (30) is bounded in absolute value by a term that converges to zero as $\theta'_t \to \theta_t$. (Note that in the Markov case the difference $V_\tau((\theta^{\tau-1}_{-t}, \theta'_t, \theta_\tau), \cdot) - V_\tau(\theta^\tau, \cdot)$ does not depend on $\theta_\tau$, so (30) equals zero without imposing Assumption 6.)

(29): Using Lemma 8 and Lemma 6, it can be expressed as
$$-\int\!\!\int \frac{F_\tau\big(\theta_\tau \mid (\theta^{\tau-1}_{-t}, \theta'_t), y^{\tau-1}\big) - F_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)}{\theta'_t - \theta_t}\, \frac{\partial V_\tau\big(\theta^\tau, m^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_\tau}\, d\theta_\tau\, d\pi_{\tau-1}.$$
Using in addition Assumption 5, the dominated convergence theorem establishes that, as $\theta'_t \to \theta_t$, this integral converges to the second integral in (26) and (27).

(28): Taking its limsup as $\theta'_t \uparrow \theta_t$ and using Fatou's lemma,^{31} we see that the limsup is bounded above by the first integral in (26); thus we obtain (26). Similarly, taking the liminf of (28) as $\theta'_t \downarrow \theta_t$ and using Fatou's lemma, we see that the liminf is bounded below by the first integral in (27), so we obtain (27). $\blacksquare$

Now combining the inequalities in Lemma 9 for $m^{\tau-1} = \theta^{\tau-1}$ with the inequalities in Lemma 7, we obtain, for $[\mu]$-almost all histories,
$$\frac{\partial^-}{\partial\theta_t} V_{\tau-1}\big(\theta^{\tau-1}, \theta^{\tau-2}, y^{\tau-2}\big) \le \int\!\!\int \frac{\partial^- V_\tau\big(\theta^\tau, \theta^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_t}\, dF_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)\, d\pi_{\tau-1}\big(y_{\tau-1} \mid \theta^{\tau-1}, y^{\tau-2}\big) - \int\!\!\int \frac{\partial V_\tau}{\partial\theta_\tau}\, \frac{\partial F_\tau\big(\theta_\tau \mid \theta^{\tau-1}, y^{\tau-1}\big)}{\partial\theta_t}\, d\theta_\tau\, d\pi_{\tau-1},$$
and the symmetric inequality with $\partial^+$ in place of $\partial^-$ and the inequality reversed. Furthermore, we have by definition of $V_{T+1}$,
$$\frac{\partial^-}{\partial\theta_t} V_{T+1}\big(\theta^T, \theta^T, y^T\big) = \frac{\partial^+}{\partial\theta_t} V_{T+1}\big(\theta^T, \theta^T, y^T\big) = \frac{\partial V_{T+1}}{\partial\theta_t}\big(\theta^T, \theta^T, y^T\big) = \frac{\partial U(\theta^T, y^T)}{\partial\theta_t}.$$
So iterating the above inequalities forward for $\tau = t+1, t+2, \dots, T+1$ yields, for $[\mu]$-almost all histories, the double inequality
$$\frac{\partial^-}{\partial\theta_t} V_t\big(\theta^t, \theta^{t-1}, y^{t-1}\big) \le \mathbb{E}^{\mu[\chi]\mid(\theta^t, \theta^{t-1}, y^{t-1})}\!\left[ \frac{\partial U\big(\tilde\theta^T, \tilde y^T\big)}{\partial\theta_t} - \sum_{\tau=t+1}^{T} \int \frac{\partial V_\tau\big((\tilde\theta^{\tau-1}, \theta_\tau), \tilde\theta^{\tau-1}, \tilde y^{\tau-1}\big)}{\partial\theta_\tau}\, \frac{\partial F_\tau\big(\theta_\tau \mid \tilde\theta^{\tau-1}, \tilde y^{\tau-1}\big)}{\partial\theta_t}\, d\theta_\tau \right] \le \frac{\partial^+}{\partial\theta_t} V_t\big(\theta^t, \theta^{t-1}, y^{t-1}\big).$$
To complete the proof of the proposition, recall that by definition $V_t(\theta^t, \theta^{t-1}, y^{t-1}) = V(\theta^t, \theta^{t-1}, y^{t-1})$. So by Lemma 8, $V(\theta^t, \theta^{t-1}, y^{t-1})$ is Lipschitz continuous in $\theta_t$ for any $(\theta^{t-1}, y^{t-1})$. Thus the partial derivative $\partial V(\theta^t, \theta^{t-1}, y^{t-1})/\partial\theta_t$ exists for almost every $\theta_t$. Whenever it does, it equals both ends of the above double inequality, and so (ICFOC) obtains. $\blacksquare$

^{31} Note that even though the integrand need not be nonnegative, it is bounded in absolute value by the Lipschitz constant $M$. Thus, in general we may have to add and subtract $M$ from the integrand before applying Fatou's lemma.

C Other Proofs Omitted in the Main Text

Proof of Proposition 4. The initial steps of the proof are in the main text. Here we simply prove that, under the assumptions in the proposition, the formula in (10) reduces to the one in (2). Differentiating the identity[32]

$$F_s\!\left(F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1}) \mid \theta^{s-1}, y^{s-1}\right) = \varepsilon_s$$

with respect to $\theta_t$, $t < s$, we have that for a.e. $\varepsilon_s$,

$$0 = f_s(\theta_s \mid \theta^{s-1}, y^{s-1})\Big|_{\theta_s = F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1})} \cdot \frac{\partial F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1})}{\partial \theta_t} + \frac{\partial F_s(\theta_s \mid \theta^{s-1}, y^{s-1})}{\partial \theta_t}\bigg|_{\theta_s = F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1})},$$

from which we obtain that

$$\frac{\partial F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1})}{\partial \theta_t} = -\,\frac{\partial F_s(\theta_s \mid \theta^{s-1}, y^{s-1})/\partial \theta_t}{f_s(\theta_s \mid \theta^{s-1}, y^{s-1})}\bigg|_{\theta_s = F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1})}.$$

It follows that

$$A^s_j(\varepsilon^s; y^{s-1}) = -\,\frac{\partial F_s(\theta_s \mid \theta^{s-1}, y^{s-1})/\partial \theta_j}{f_s(\theta_s \mid \theta^{s-1}, y^{s-1})}\bigg|_{\theta^s = z^s(\varepsilon^s; y^{s-1})} = I^s_j(\theta_s \mid \theta^{s-1}, y^{s-1})\Big|_{\theta^s = z^s(\varepsilon^s; y^{s-1})}.$$

We conclude that

$$\frac{\partial z_s(\varepsilon^s; y^{s-1})/\partial \varepsilon_t}{\partial z_t(\varepsilon^t; y^{t-1})/\partial \varepsilon_t} = I^s_t(\theta_s \mid \theta^{s-1}, y^{s-1})\Big|_{\theta^s = z^s(\varepsilon^s; y^{s-1})} + I^s_{t+1}(\theta_s \mid \theta^{s-1}, y^{s-1})\Big|_{\theta^s = z^s(\varepsilon^s; y^{s-1})} \, I^{t+1}_t(\theta_{t+1} \mid \theta^t, y^t)\Big|_{\theta^{t+1} = z^{t+1}(\varepsilon^{t+1}; y^t)} + \cdots = J^s_t\!\left(z^s(\varepsilon^s; y^{s-1}); y^{s-1}\right).$$

[32] Note that the differentiability of $F_s^{-1}(\varepsilon_s \mid \theta^{s-1}, y^{s-1})$ with respect to $\theta_t$, $t < s$, follows from the assumptions in the proposition. This can be seen from the implicit function theorem applied to the identity $F_s^{-1}\!\left(F_s(\theta_s \mid \theta^{s-1}, y^{s-1}) \mid \theta^{s-1}, y^{s-1}\right) = \theta_s$.
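As an illustrative aside (ours, not in the original text), the identity $\partial F_s^{-1}/\partial \theta_t = -(\partial F_s/\partial \theta_t)/f_s$ derived above can be checked numerically in a one-lag example with logistic shocks and AR coefficient $\gamma$, for which both sides equal the impulse response $\gamma$. The kernel, parameter values, and names below are our own hypothetical choices.

```python
import math

# Numerical sanity check (illustrative): kernel F_s(th_s | th_prev) is a
# logistic CDF centered at GAMMA * th_prev, so the impulse response is GAMMA.

GAMMA = 0.7

def F(th_s, th_prev):                 # logistic CDF with AR(1) location
    return 1.0 / (1.0 + math.exp(-(th_s - GAMMA * th_prev)))

def F_inv(eps, th_prev):              # closed-form inverse of F in th_s
    return GAMMA * th_prev + math.log(eps / (1.0 - eps))

def f(th_s, th_prev):                 # logistic density: F' = F * (1 - F)
    p = F(th_s, th_prev)
    return p * (1.0 - p)

eps, th_prev, h = 0.3, 0.5, 1e-6
th_s = F_inv(eps, th_prev)

# Left side: dF^{-1}/dth_prev via central difference.
lhs = (F_inv(eps, th_prev + h) - F_inv(eps, th_prev - h)) / (2 * h)
# Right side: -(dF/dth_prev) / f, evaluated at th_s = F^{-1}(eps | th_prev).
rhs = -(F(th_s, th_prev + h) - F(th_s, th_prev - h)) / (2 * h) / f(th_s, th_prev)

assert abs(lhs - GAMMA) < 1e-4
assert abs(rhs - GAMMA) < 1e-4
```

Here the chain of impulse responses collapses to a single coefficient, but the same finite-difference comparison extends to longer lags, where the ratio of shock derivatives equals the sum of products of one-step responses, as in the display above.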

Rewriting (10) as

$$\frac{\partial V\!\left(z^t(\varepsilon^t; y^{t-1}); z^{t-1}(\varepsilon^{t-1}; y^{t-2}), y^{t-1}\right)}{\partial \theta_t} = \mathbb{E}^{\hat\lambda[\hat\chi] \mid \varepsilon^t, \hat h^{t-1}}\!\left[ \frac{\partial U\!\left(z^T(\tilde\varepsilon^T; \tilde y^{T-1}), \tilde y^T\right)}{\partial \theta_t} + \sum_{s=t+1}^{T} \left( \frac{\partial z_s(\tilde\varepsilon^s; \tilde y^{s-1})/\partial \varepsilon_t}{\partial z_t(\varepsilon^t; y^{t-1})/\partial \varepsilon_t} \right) \frac{\partial U\!\left(z^T(\tilde\varepsilon^T; \tilde y^{T-1}), \tilde y^T\right)}{\partial \theta_s} \right],$$

we then have that

$$\begin{aligned}
\frac{\partial V\!\left(z^t(\varepsilon^t; y^{t-1}); z^{t-1}(\varepsilon^{t-1}; y^{t-2}), y^{t-1}\right)}{\partial \theta_t}
&= \mathbb{E}^{\hat\lambda[\hat\chi] \mid \varepsilon^t, \hat h^{t-1}}\!\left[ \frac{\partial U\!\left(z^T(\tilde\varepsilon^T; \tilde y^{T-1}), \tilde y^T\right)}{\partial \theta_t} + \sum_{s=t+1}^{T} J^s_t\!\left(z^s(\tilde\varepsilon^s; \tilde y^{s-1}); \tilde y^{s-1}\right) \frac{\partial U\!\left(z^T(\tilde\varepsilon^T; \tilde y^{T-1}), \tilde y^T\right)}{\partial \theta_s} \right] \\
&= \mathbb{E}^{\lambda[\chi] \mid z^t(\varepsilon^t; y^{t-1}); z^{t-1}(\varepsilon^{t-1}; y^{t-2}), y^{t-1}}\!\left[ \frac{\partial U(\tilde\theta, \tilde y)}{\partial \theta_t} + \sum_{s=t+1}^{T} J^s_t(\tilde\theta^s; \tilde y^{s-1}) \frac{\partial U(\tilde\theta, \tilde y)}{\partial \theta_s} \right],
\end{aligned}$$
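As a numerical aside (ours, not in the original text), the final payoff formula can be illustrated by Monte Carlo in a two-period toy model with $\theta_2 = \gamma \theta_1 + \varepsilon$, so that $J^2_1 = \gamma$: a finite difference of the simulated value function should match the impulse-response-weighted sum of marginal utilities. The model, payoff, and parameters below are invented for the sketch.

```python
import random

# Toy model: theta_2 = GAMMA * theta_1 + eps, eps ~ U[0, 1); realized payoff
# U = theta_1 * X1 + theta_2 ** 2 (allocation x_2(theta_2) = theta_2).
# The dynamic envelope formula predicts
#   V'(theta_1) = E[ dU/dtheta_1 + GAMMA * dU/dtheta_2 | theta_1 ].

GAMMA, X1, N = 0.6, 1.0, 100_000
rng = random.Random(0)
shocks = [rng.random() for _ in range(N)]       # common random numbers

def V(th1):
    """Simulated value under truthtelling at first-period type th1."""
    return sum(th1 * X1 + (GAMMA * th1 + e) ** 2 for e in shocks) / N

th1, h = 0.4, 1e-5
fd = (V(th1 + h) - V(th1 - h)) / (2 * h)        # numerical V'(theta_1)
# Envelope side: dU/dtheta_1 = X1, dU/dtheta_2 = 2 * theta_2, weight GAMMA.
envelope = X1 + sum(GAMMA * 2 * (GAMMA * th1 + e) for e in shocks) / N

assert abs(fd - envelope) < 1e-3
```

Because the same shock draws are reused on both sides, the comparison is deterministic: the simulated value function is a quadratic in $\theta_1$, so the central difference recovers its derivative essentially exactly.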

which is the same formula as in (2). $\blacksquare$

Proof of Proposition 5. By (iii), it suffices to consider only single-stage deviations in period $t$, i.e., deviations to some report $m_t$ followed by truthtelling from $t+1$ onward. Thus, it suffices to verify that the agent's period-$t$ expected payoff from such a deviation at any truthful history $(\theta^{t-1}; \theta^{t-1}, y^{t-1})$ and at any current type $\theta_t$, which is given by

$$\Pi\!\left(\theta_t, m_t; \theta^{t-1}, y^{t-1}\right) \equiv \mathbb{E}^{\lambda[\chi] \mid (\theta^{t-1}, \theta_t); (\theta^{t-1}, m_t), y^{t-1}}\!\left[ U(\tilde y, \tilde\theta) \right],$$

is maximized by reporting $m_t = \theta_t$. For this purpose, the following lemma is useful. (A similar approach has been applied to static mechanism design with one-dimensional types and multidimensional decisions, but under stronger assumptions; see Garcia, 2005.)

Lemma 10. Consider a function $\Pi : (\theta, m) \in \Theta^2 \to \mathbb{R}$. Suppose that (a) $\Pi(\cdot\,, m)$ is Lipschitz continuous in $\theta$ for all $m$, (b) $V(\theta) \equiv \Pi(\theta, \theta)$ is Lipschitz continuous in $\theta$, and (c) for any $m$, for a.e. $\theta$, $\left( V'(\theta) - \partial \Pi(\theta, m)/\partial \theta \right)(\theta - m) \ge 0$. Then $V(\theta) \ge \Pi(\theta, m)$ for all $(\theta, m)$.

Proof of the Lemma: Let $g(\theta, m) \equiv V(\theta) - \Pi(\theta, m)$. For any fixed $m$, $g(\cdot\,, m)$ is Lipschitz continuous in $\theta$ by (a) and (b). Hence, it is differentiable a.e. in $\theta$, and

$$g(\theta, m) = \int_m^{\theta} \frac{\partial g(z, m)}{\partial \theta}\, dz = \int_m^{\theta} \left( V'(z) - \frac{\partial \Pi(z, m)}{\partial \theta} \right) dz.$$

By (c), the integrand is nonnegative for a.e. $z \ge m$ and nonpositive for a.e. $z \le m$. Therefore, $g(\theta, m) \ge 0$ for both $\theta \ge m$ and $\theta \le m$. $\blacksquare$

Now, to apply the Lemma, we interpret $\Pi(\theta_t, m_t; \theta^{t-1}, y^{t-1})$ as the agent's expected utility from truthtelling in the mechanism $\hat\chi$ constructed from $\chi$ by ignoring the agent's report in period $t$ and substituting $m_t$ instead. Assumption (iii) means that the mechanism $\hat\chi$ is IC at any history in period $t$, and therefore $\Pi(\theta_t, m_t; \theta^{t-1}, y^{t-1})$ is the agent's value function in the mechanism. Applying to $\hat\chi$ the result in Proposition 2 (or, equivalently, in Proposition 4), we have that, for any $m_t$, $\Pi(\theta_t, m_t; \theta^{t-1}, y^{t-1})$ is Lipschitz continuous in $\theta_t$ and $\partial \Pi(\theta_t, m_t; \theta^{t-1}, y^{t-1})/\partial \theta_t = D(\theta_t; \theta^{t-1}, y^{t-1}, m_t)$ for a.e. $\theta_t$. The former property establishes assumption (a) in the Lemma. Assumption (i) in the proposition establishes assumption (b) in the Lemma and, together with assumption (ii) in the proposition, it establishes assumption (c) in the Lemma. The Lemma then implies that $\Pi(\theta_t, m_t; \theta^{t-1}, y^{t-1})$ is indeed maximized by reporting $m_t = \theta_t$, which implies that $\chi$ is IC at any truthful period-$t$ history. $\blacksquare$
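As an aside (ours, not part of the proof), the single-crossing mechanism behind Lemma 10 can be seen in a quadratic toy example: with $\Pi(\theta, m) = \theta m - m^2/2$ and $V(\theta) = \Pi(\theta, \theta) = \theta^2/2$, condition (c) reads $(V'(\theta) - \partial \Pi(\theta, m)/\partial \theta)(\theta - m) = (\theta - m)^2 \ge 0$, and the conclusion $V(\theta) \ge \Pi(\theta, m)$ holds with slack $(\theta - m)^2/2$.

```python
# Toy instance of Lemma 10 (our example, not the paper's general setting).

def Pi(theta, m):
    """Deviation payoff: report m when the type is theta."""
    return theta * m - m * m / 2.0

def V(theta):
    """Truthful value function V(theta) = Pi(theta, theta)."""
    return Pi(theta, theta)

grid = [i / 10.0 for i in range(-20, 21)]

# Condition (c): (V'(theta) - dPi/dtheta)(theta - m) = (theta - m)^2 >= 0,
# and the lemma's conclusion V >= Pi holds everywhere on the grid.
assert all(V(th) >= Pi(th, m) - 1e-12 for th in grid for m in grid)
```

The example shows why (c) is a first-order, local condition that nonetheless delivers a global comparison: the integrand in the lemma's proof changes sign exactly at $z = m$.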

Proof of Proposition 6. Let $\mu_i[\chi, \psi]$ and $\mu_i[\chi, \hat\psi]$ denote the randomized direct mechanisms that agent $i$ faces respectively under $\langle \chi, \psi \rangle$ and $\langle \chi, \hat\psi \rangle$, as defined above. Let $V_i[\chi, \psi] : H_i \to \mathbb{R}$ and $V_i[\chi, \hat\psi] : H_i \to \mathbb{R}$ denote the corresponding value functions. We first establish the following result.

Lemma 11. Suppose the assumptions in Proposition 6 hold. Then, for $\mu_i^T[\chi, \psi]$-almost all truthful private histories $h_i^{t-1}$, there exists a scalar $K_{it}(h_i^{t-1})$ such that

$$V_i[\chi, \psi]\!\left(\tilde\theta_{it}, \tilde h_i^{t-1}\right) - V_i[\chi, \hat\psi]\!\left(\tilde\theta_{it}, \tilde h_i^{t-1}\right) = K_{it}(h_i^{t-1}). \qquad (31)$$

From Lemma 1, the fact that $\langle \chi, \psi \rangle$ and $\langle \chi, \hat\psi \rangle$ are ex-ante BIC implies that they are BIC at $\lambda[\chi]$-almost all truthful private histories $(\theta_i^{t-1}; \theta_i^{t-1}, x_i^{t-1})$, for any $i$ and any $t \ge 1$. Iterating (IC-FOC) backward (or, alternatively, using (6)) and using the result in Proposition 1 (alternatively, the result in Proposition 3) then implies that, under quasi-linearity, for any $t \ge 1$ and $\mu_i^T[\chi, \psi]$-almost all truthful private histories $(\theta_{it}, h_i^{t-1})$, the value functions $V_i[\chi, \psi]$ and $V_i[\chi, \hat\psi]$ are Lipschitz continuous in $\theta_{it}$ and

$$\frac{\partial V_i[\chi, \psi](\theta_{it}, h_i^{t-1})}{\partial \theta_{it}} = \frac{\partial V_i[\chi, \hat\psi](\theta_{it}, h_i^{t-1})}{\partial \theta_{it}} \quad \text{a.e. } \theta_{it}.$$

It follows that, for $\mu_i^T[\chi, \psi]$-almost all truthful private histories $h_i^{t-1}$, there exists a scalar $K_{it}(h_i^{t-1})$ such that the condition in (31) holds.

The result for $t = 1$ then follows directly from this lemma by letting $K_i = K_{i1}(h^0)$, where $h^0$ is the null history, and noting that, in any ex-ante BIC mechanism, with probability one, the value function coincides with the expected payoff under truthtelling.

The proof for the second result in the proposition is by induction. Suppose there exists a $K_i \in \mathbb{R}$ such that

$$\mathbb{E}\!\left[ V_i[\chi, \psi]\!\left(\tilde\theta_{i\tau}, \tilde h_i^{\tau-1}\right) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi]\!\left(\tilde\theta_{i\tau}, \tilde h_i^{\tau-1}\right) \,\middle|\, \tilde\theta_{i1} \right] = K_i \qquad (32)$$

when $\tau = t \ge 1$. We then show that (32) holds also when $\tau = t + 1$.

First note that, for $\mu[\chi]$-almost all private histories $(\theta_{it}, h_i^{t-1})$,

$$V_i[\chi, \psi]\!\left(\theta_{it}, h_i^{t-1}\right) = \mathbb{E}^{\mu[\chi] \mid \theta_{it}, h_i^{t-1}}\!\left[ V_i[\chi, \psi]\!\left(\tilde\theta_{i,t+1}, \tilde h_i^{t}\right) \right],$$

where $\tilde h_i^t = (\tilde h_i^{t-1}, \tilde\theta_{it}, \tilde m_{it}, \chi_{it}(\tilde m_{it}, \tilde m_{-i,t}))$ with $\tilde m_{it} = \tilde\theta_{it}$. By the law of iterated expectations, we then have that

$$\mathbb{E}\!\left[ V_i[\chi, \psi](\tilde\theta_{it}, \tilde h_i^{t-1}) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi](\tilde\theta_{it}, \tilde h_i^{t-1}) \,\middle|\, \tilde\theta_{i1} \right] = \mathbb{E}\!\left[ V_i[\chi, \psi](\tilde\theta_{i,t+1}, \tilde h_i^{t}) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi](\tilde\theta_{i,t+1}, \tilde h_i^{t}) \,\middle|\, \tilde\theta_{i1} \right].$$

It follows that

$$\mathbb{E}\!\left[ V_i[\chi, \psi](\tilde\theta_{it}, \tilde h_i^{t-1}) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi](\tilde\theta_{it}, \tilde h_i^{t-1}) \,\middle|\, \tilde\theta_{i1} \right] = \mathbb{E}\!\left[ K_{i,t+1}\!\left(\tilde h_i^{t}\right) \,\middle|\, \tilde\theta_{i1} \right]. \qquad (33)$$

Now note that, when assumption DNOT holds, the stochastic process $\lambda[\chi]$ over $\Theta$ does not depend on $\chi$. Because any truthful private history $\tilde h_i^t$ is then a deterministic function of $\tilde\theta_i^t$ and $\tilde\theta_{-i}^t$, and because types are independent, we then have that

$$\mathbb{E}\!\left[ K_{i,t+1}\!\left(\tilde h_i^{t}\right) \,\middle|\, \tilde\theta_{i1} \right] = \mathbb{E}\!\left[ V_i[\chi, \psi](\tilde\theta_{i,t+1}, \tilde h_i^{t}) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi](\tilde\theta_{i,t+1}, \tilde h_i^{t}) \,\middle|\, \tilde\theta_{i1} \right]. \qquad (34)$$

Combining (33) with (34) then gives

$$\mathbb{E}\!\left[ V_i[\chi, \psi](\tilde\theta_{i,t+1}, \tilde h_i^{t}) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi](\tilde\theta_{i,t+1}, \tilde h_i^{t}) \,\middle|\, \tilde\theta_{i1} \right] = \mathbb{E}\!\left[ V_i[\chi, \psi](\tilde\theta_{it}, \tilde h_i^{t-1}) \,\middle|\, \tilde\theta_{i1} \right] - \mathbb{E}\!\left[ V_i[\chi, \hat\psi](\tilde\theta_{it}, \tilde h_i^{t-1}) \,\middle|\, \tilde\theta_{i1} \right] = K_i.$$

Using again the fact that the value function coincides with the equilibrium payoff with probability one then gives the result.

Finally, note that, when $N = 1$, $\tilde h_1^t$ is a deterministic function of $\tilde\theta_1^t$ only. The result in (34) is thus always true when the allocation rule is deterministic. We conclude that, when $N = 1$, the result in the second part of the proposition holds even if assumption DNOT is dispensed with. $\blacksquare$

Proof of Lemma 5. Fix an arbitrary history $\theta \in \Theta$ and let $t$ be the first period such that $\chi_t(\theta^t) = 0$. Then let $s > t$ be the first period after $t$ such that $\chi_s(\theta^s) = 1$. Because there is no learning in periods $t+1, \dots, s$, the last $s - t$ components of $\theta^s$ are necessarily equal to $\theta_t$ (that is, $\theta_\tau = \theta_t$ for $\tau = t, t+1, \dots, s$). Now consider an allocation rule $\hat\chi$ such that (1) $\hat\chi_t(\theta^t) = \chi_s(\theta^s) = 1$; (2) for any successor $\theta^\tau$ to $\theta^t$, the behavior of $\hat\chi$ is defined by the behavior of $\chi$ for the analogous successor $\theta^{s + (\tau - t)}$ to $\theta^s$, with $\hat\chi_\tau \equiv 0$ if $s + (\tau - t) > T$; and (3) $\hat\chi$ agrees with $\chi$ elsewhere. Next, let $\hat\psi$ be the payment scheme that is obtained from $\psi$ by following the same construction as for $\hat\chi$.

Now note that, because there is no learning during periods of no sales and because there is no discounting, the mechanism $\langle \hat\chi, \hat\psi \rangle$ leads to the same payoffs as $\langle \chi, \psi \rangle$. Repeating the above construction for all possible histories $\theta \in \Theta$ gives rise to an ex-ante IC mechanism $\langle \hat\chi, \hat\psi \rangle$ such that $\hat\chi$ is a stopping rule and the expected payoffs of both the buyer and the seller under $\langle \hat\chi, \hat\psi \rangle$ are the same as under $\langle \chi, \psi \rangle$. $\blacksquare$
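As an aside (ours, not in the original text), the re-timing construction in the proof of Lemma 5 can be sketched in code: with no discounting and no learning during no-sale periods, pulling every sale forward to close the gaps yields a stopping rule with unchanged total payoff. The binary allocation path below is an arbitrary example.

```python
# Toy version of the re-timing argument: a stopping rule is a binary path of
# the form 1...10...0. Compacting the sale periods preserves the undiscounted
# total payoff (here normalized to one unit per sale period).

def to_stopping_rule(x):
    """Move all sale periods (x_t = 1) before all no-sale periods."""
    ones = sum(x)
    return [1] * ones + [0] * (len(x) - ones)

x = [1, 0, 0, 1, 1, 0, 1]          # arbitrary allocation path with gaps
x_hat = to_stopping_rule(x)

assert x_hat == [1, 1, 1, 1, 0, 0, 0]   # a stopping rule: 1s then 0s
assert sum(x_hat) == sum(x)             # same undiscounted total payoff
```

The proof's construction is richer than this sketch, since it also re-times the payment scheme and must handle every history, but the payoff-preservation logic is the same: without discounting, only the number of sale periods matters.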
