
Expected Risk-Adjusted Return for Insurance Based Models

Tatiana Solcà

Diploma thesis in mathematics at ETH Zürich under the supervision of Prof. Dr. P. Embrechts and Dr. U. Schmock

Spring 2000

To my parents: for giving me the opportunity to reach this milestone.

Acknowledgements

It is my pleasure to thank Prof. Paul Embrechts for giving me the opportunity to work in this very interesting field. I am very grateful to Dr. Uwe Schmock for his helpful suggestions and support; he always managed to find time to discuss problems related to my work. Next, I would like to thank Francesco for his encouragement, constant support and patience. Let me finally thank all my friends and schoolmates for their advice and all-round help during this work. In particular, I thank Nicola for his pleasant and constructive "pine-letters" and "talks" from Paris, Giacomo and Andrea for answering my tedious questions concerning LaTeX, and Carlo for the pleasant time spent working in the computer room. Hereby, I express my sincere gratitude to all these people.


Contents

Acknowledgements                                    v

1 Introduction                                      1

2 Preliminaries                                     3
  2.1 Risk and Risk Measures                        3
      2.1.1 Notation and properties                 3
      2.1.2 The tail conditional expectation        4
  2.2 Convergence of random variables               5
  2.3 Stationary process and ergodic theorem        6
  2.4 Properties of weak convergence                8
  2.5 The model                                     9
      2.5.1 The idea and the notation               9
      2.5.2 The different variants of the model    11

3 Expected shortfall                               13
  3.1 The asymptotical limit of upper bounds       15
  3.2 The n-dimensional model                      24
  3.3 Conclusion                                   31

4 Standard deviation                               33
  4.1 The simplest cases: n = 1 and n = 2          34
  4.2 The general case                             42
  4.3 The second variant of the model              43
  4.4 The third variant of the model               49

5 Capital allocation                               53
  5.1 Covariance principle                         53
  5.2 Expected-shortfall principle                 60

A Calculating expected shortfall                   63
B The Lagrange multipliers rule                    67

Chapter 1 Introduction

Nowadays, an insurance company takes an interest in the application of actuarial techniques for measuring risk and for assessing profitable areas of business. Its management is faced continually with the task of reconciling the conflicting interests of policyholders and shareholders. The former are interested in strong financial strength, while the latter are more concerned with a return on equity that is commensurate with the risk inherent in their investment. In order to satisfy these needs, the management selects profitable business and limits the company's risk.

Obviously, an insurance company's liabilities cannot be entirely foreseen. If companies are to maintain a high degree of financial security, they must efficiently manage asset and liability portfolios, as well as understand and keep control of the underlying risks. One difficulty is that an insurance company faces many different types of risk, and they are not at all easy to model. For a thorough understanding of them it is essential to have quantitative models, but even though there are many different measures of risk, to date none of them can be considered the "best".

In this paper, we focus on the kinds of risks which can be represented by random variables. In particular, we analyze a model denoting the risk portfolio of an insurance company. We suppose that the management steers the company by choosing the numbers of independent risks so as to improve the overall expected risk-adjusted return, defined as the expected return divided by the assigned risk capital. In other words, with respect to the defined model, we try to determine whether an optimal portfolio exists.

For these purposes, we organized this paper in four parts. In Chapter 2, we first develop some approaches to measuring risk by considering the axioms on a risk measure stated by Artzner et al.,


which leads to the concept of a coherent risk measure. Moreover, we briefly introduce some basic concepts of probability theory (like convergence of random variables and the ergodic theorem) which will be useful for our purpose. Then, in Section 2.5, we give a description of the model considered in this whole work and indicate which methods will be used to measure risk. In more detail, we suppose that the whole profit R of an insurance company consisting of n units can be denoted by the sum of the stochastic gains Ri of every unit i ∈ {1, ..., n}. Ri is defined as the revenues minus the costs and the losses, in the form illustrated in (2.3). We will analyze this model using the expected shortfall risk measure, the coherent risk measure suggested by Artzner et al. (1998), and, later, by means of the standard deviation risk measure, which is very popular in practice.

In Chapter 3 we discuss the model using the expected shortfall for quantifying risk. We will estimate the performance of the company by examining the expected risk-adjusted return r, i.e., r = E[R]/ρ(R), where ρ denotes the risk measure. In fact, the company's aim will be to invest its resources optimally and maximize r. We will therefore try to determine an optimal portfolio by choosing the values of N1, ..., Nn, which denote the numbers of contracts of the respective business units 1, ..., n, such that a maximum for r is attained. In particular, we focus on a proposition which shows the existence of the limit of the upper bounds for the expected risk-adjusted return r.

In Chapter 4 we repeat the same approaches using the standard deviation risk measure. In this case we concentrate on the optimization problem defined in (4.2), i.e., we will try to determine the optimal number of contracts of every unit i ∈ {1, ..., n} in order to maximize E[R]/C subject to the constraint ρ(R) ≤ C, where C denotes the capital the company wants to invest.
We will examine three different variants of the model defined in Section 2.5 and show that this optimization problem has a solution for every variant.

In Chapter 5 we consider two different capital allocation principles, namely the covariance principle and the expected-shortfall principle. We will calculate the covariance principle for ρ(R) = −E[R] + κσ(R) with κ > 0, and for two variants of the model R considered in Chapter 4. For simplicity we consider a company consisting of only two units, but the same results can be computed in a similar way for the general case; moreover, we calculate E[Ri | R ≤ c] for the multivariate normal case. In the Appendix we briefly recall some useful technical rules for calculating the expected shortfall.

Chapter 2 Preliminaries

2.1 Risk and Risk Measures

It is not easy to define risk, and we will avoid attempting to give an exact definition. Nevertheless, in a recent paper, Artzner et al. (1998) have come up with an appropriate description of what risk actually is. In this paper, we consider risk related to the variability of the future value of a position due to uncertain events. Therefore, we treat those kinds of risks which can be represented by random variables, and which indicate the possible future values of positions.

2.1.1 Notation and properties

Let Ω be the set of possible states of nature, and assume it is finite. By a random variable X we denote the final net worth of a position for each element of Ω. Let G be the set of all risks, i.e., the set of all real-valued functions on Ω. Remark that G can be identified with R^n, where n = card(Ω).

Definition 2.1. A measure of risk is a mapping ρ from G into R.

The real number ρ(X) can be interpreted, when positive, as the minimum extra cash to add to the risky position X, or, when negative, as the cash amount that can be subtracted from the position. We now consider some properties for a risk measure ρ defined on G, listed in the form of axioms.

Axiom (Translation invariance). For all X ∈ G and all real numbers α: ρ(X + αr) = ρ(X) − α, where r is the rate of return on a reference riskless investment.


Axiom (Subadditivity). For all X1, X2 ∈ G: ρ(X1 + X2) ≤ ρ(X1) + ρ(X2).

Axiom (Positive homogeneity). For all λ ≥ 0 and all X ∈ G: ρ(λX) = λρ(X).

Axiom (Monotonicity). For all X, Y ∈ G with X ≤ Y: ρ(Y) ≤ ρ(X).

Axiom (Relevance). For all X ∈ G with X ≤ 0 and X ≠ 0: ρ(X) ≥ 0.

Remarks.

- Translation invariance means that by adding (resp. subtracting) the sure initial amount α to (from) the initial position and investing it in the reference instrument (with rate of return r), the risk measure decreases (resp. increases) by α.
- Subadditivity reflects the diversification of portfolios and ensures that the risk measure behaves reasonably when adding two positions; we can say: "a merger does not create extra risk".

These axioms on measures of risk are related to the axioms on acceptance sets, but we won't treat this topic (for more details see Artzner et al. (1998)). We are interested in the following definition, since one can argue that any risk measure which is to be used to effectively regulate or manage risks should satisfy these axioms.

Definition 2.2. A risk measure satisfying the four axioms of translation invariance, subadditivity, positive homogeneity and monotonicity is called coherent.

In their paper, Artzner et al. suggest a specific coherent measure called the tail conditional expectation, and in the following chapter we will study the model of a portfolio using this risk measure.
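None of the following appears in the thesis; it is a small finite-Ω sketch of two of the axioms. On a finite state space, G can be identified with R^n, and the negative average of the k worst outcomes (a discrete analogue of the tail conditional expectation under the uniform measure) satisfies subadditivity and positive homogeneity by construction. The distributions and sample sizes are illustrative assumptions.

```python
import random

def tail_avg(outcomes, k):
    """Negative average of the k worst outcomes on a finite state
    space: a discrete analogue of the tail conditional expectation
    under the uniform probability measure."""
    worst = sorted(outcomes)[:k]
    return -sum(worst) / k

random.seed(1)
n_states, k = 1000, 50
X = [random.gauss(0.0, 1.0) for _ in range(n_states)]
Y = [random.gauss(0.0, 2.0) for _ in range(n_states)]

# Subadditivity: rho(X + Y) <= rho(X) + rho(Y) -- "a merger does not
# create extra risk" (the k worst joint outcomes cannot be worse than
# the k worst of each position combined).
XY = [x + y for x, y in zip(X, Y)]
assert tail_avg(XY, k) <= tail_avg(X, k) + tail_avg(Y, k)

# Positive homogeneity: rho(lambda * X) = lambda * rho(X).
lam = 2.5
assert abs(tail_avg([lam * x for x in X], k) - lam * tail_avg(X, k)) < 1e-9
```

The subadditivity assertion holds for any data set, since the sum of the k smallest values of X + Y can never be below the sum of the k smallest values of X plus the k smallest of Y.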

2.1.2 The "tail conditional expectation" measure of risk

In practice there are various methods of measuring risk, and these axioms are not restrictive enough to specify a unique risk measure. The choice of precisely which measure to use should be made on the basis of additional considerations.


In this work we consider the tail conditional expectation (expected shortfall) which, under some assumptions, is the least expensive among those measures which are coherent and accepted by regulators¹, since they are more conservative than the value-at-risk measurement. Managers and regulators are primarily interested in setting "minimal requirements", or a maximal limit on the potential losses. With a shortfall approach, one can answer the question "how bad is bad?" by measuring the negative of the average future net worth X of a position, given that X is below the quantile c ≤ 0, i.e.,

    ρ(X) = E[−X | X ≤ c],    (2.1)

provided that P(X ≤ c) > 0. In the following sections we will focus on definitions and theorems which will be useful later for our purposes.
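A minimal sketch, not from the thesis, of how (2.1) can be estimated from simulated data; the estimator, the standard-normal example and the threshold c = −1 are illustrative assumptions.

```python
import random

def expected_shortfall(samples, c):
    """Estimate rho(X) = E[-X | X <= c] from simulated outcomes, as in
    (2.1); requires at least one sample at or below the threshold c."""
    tail = [x for x in samples if x <= c]
    if not tail:
        raise ValueError("no samples with X <= c; P(X <= c) not observed")
    return -sum(tail) / len(tail)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
rho = expected_shortfall(xs, c=-1.0)
# For a standard normal, E[-X | X <= -1] = phi(1)/Phi(-1), roughly 1.52.
```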

2.2 Convergence of random variables

Definition 2.3. Let X1, X2, ... and X be real-valued random variables on some probability space (Ω, F, P). We say:

i) {Xn}_{n∈N} converges to X almost surely (a.s.) if

    P({ω | lim_{n→∞} Xn(ω) = X(ω)}) = 1,

ii) {Xn}_{n∈N} converges to X in r-th mean (written Xn →^r X) for r > 0 if E[|Xn|^r] < ∞ for all n and

    E(|Xn − X|^r) → 0   as n → ∞,

iii) {Xn}_{n∈N} converges to X in probability (written Xn →^P X) if for every ε > 0

    P(|Xn − X| > ε) → 0   as n → ∞,

iv) {Xn}_{n∈N} converges to X in distribution (written Xn →^D X) if

    P(Xn ≤ x) → P(X ≤ x)   as n → ∞

for all points x at which F_X(x) = P(X ≤ x) is continuous.

¹A regulator is a supervisor who takes into account the unfavorable states when allowing a risky position.


Remark. The following implications hold in general:

i) Xn →^{a.s.} X ⟹ Xn →^P X ⟹ Xn →^D X,

ii) Xn →^r X ⟹ Xn →^P X ⟹ Xn →^D X.

We now recall the theorem of bounded convergence.

Theorem 2.4 (Lebesgue bounded convergence theorem). Consider a sequence {Xn}_{n∈N} of random variables with Xn →^{a.s.} X. If there is a random variable Y such that E|Y| < ∞ and |Xn| ≤ Y for all n, then

    E[Xn] → E[X]   as n → ∞.

Proof. See Grimmett and Stirzaker (1992), Chapter 5.6.

Remark. It is appropriate to note that, for convergence in r-th mean, the values r = 1 and r = 2 are of most use. In these cases we write, respectively,

i) Xn → X in mean, instead of Xn →^1 X,

ii) Xn → X in mean square, instead of Xn →^2 X.
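The mode of convergence that matters most later is exactly i): sample averages converging to their mean in (first) mean. A small simulation, with an assumed Exp(1) claim distribution (not from the thesis), shows E|X̄N − μ| shrinking as N grows:

```python
import random

def mean_abs_error(N, trials=2000, mu=1.0):
    """Estimate E|X_bar_N - mu| for the average X_bar_N of N i.i.d.
    Exp(1/mu) variables; this quantity tending to 0 is exactly
    convergence of X_bar_N to mu in (first) mean."""
    rng = random.Random(42)
    total = 0.0
    for _ in range(trials):
        xbar = sum(rng.expovariate(1.0 / mu) for _ in range(N)) / N
        total += abs(xbar - mu)
    return total / trials

errors = [mean_abs_error(N) for N in (10, 100, 1000)]
# The error shrinks roughly like 1/sqrt(N).
assert errors[0] > errors[1] > errors[2]
```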

2.3 Stationary process and ergodic theorem

Definition 2.5. A real-valued process X1, X2, ... is called stationary if for every x1, ..., xn ∈ R and every integer k > 0

    P[X1 ≤ x1, ..., Xn ≤ xn] = P[X1+k ≤ x1, ..., Xn+k ≤ xn].

Consider a probability space (Ω, F, P) and a transformation T: Ω → Ω.

Definition 2.6. A transformation T: Ω → Ω is called measurable if for all A ∈ F: T⁻¹(A) = {ω ∈ Ω | T(ω) ∈ A} ∈ F.

Definition 2.7. A measurable transformation T: Ω → Ω is called measure-preserving if for all A ∈ F: P(T⁻¹(A)) = P(A).

Let T be a measure-preserving transformation on (Ω, F, P).

Definition 2.8. A set A ∈ F is called invariant if T⁻¹(A) = A. We denote by J the set of all invariant A ∈ F. Note that J is a σ-field.

Definition 2.9. T is called ergodic if P(A) ∈ {0, 1} for every A ∈ J.

Theorem 2.10 (Ergodic theorem). Let T be a measure-preserving transformation on (Ω, F, P). Then for any random variable X such that E|X| < ∞:

    lim_{n→∞} (1/n) Σ_{k=0}^{n−1} X(T^k(ω)) = E[X | J]   a.s. and in mean.

Proof. See Breiman (1968), pp. 113–115.

Corollary 2.11. Let T be a measure-preserving and ergodic transformation on (Ω, F, P). Then for any random variable X such that E|X| < ∞:

    lim_{n→∞} (1/n) Σ_{k=0}^{n−1} X(T^k(ω)) = E[X]   a.s. and in mean.

Proof. See Breiman (1968), p. 115.

These general definitions and results concerning invariance and ergodicity can be applied to the original stationary process X1, X2, ... by considering the shift transformation T, i.e., if x = (x0, x1, ...) is a real sequence of values of the stationary process, then Tx = (x1, x2, ...). For more details see Grimmett and Stirzaker (1992). The corresponding form of the ergodic theorem for stationary processes is:

Theorem 2.12 (Ergodic theorem for stationary processes). Let X1, X2, ... be a stationary process such that E|X1| < ∞. Then

    lim_{n→∞} (1/n) Σ_{k=1}^{n} Xk = E[X1 | J]   a.s. and in mean.

Proof. See Grimmett and Stirzaker (1992), Chapter 9.5.

Furthermore, we define an ergodic stationary process and write down the corresponding version of the ergodic theorem.

Definition 2.13. Let T be the shift operator and A a set of real sequences. A stationary process is said to be ergodic if P{(X0, X1, ...) ∈ A} = 0 or 1 whenever A is shift-invariant, i.e., invariant with respect to the shift operator T.

Remark. Let {Xn}_{n∈N0} be a real-valued stationary process. Then the following conditions are equivalent:

(a) {Xn}_{n∈N0} is ergodic,

(b) P{(X0, X1, ...) ∈ A} = 0 or 1, for every invariant set A,

(c) lim_{n→∞} (1/n) Σ_{j=1}^{n} φ(Xj, Xj+1, ...) = E[φ(X0, X1, ...)], for every measurable function φ of real sequences, provided the expectation exists,

(d) lim_{n→∞} (1/n) Σ_{j=1}^{n} φ(Xj, ..., Xj+k) = E[φ(X0, ..., Xk)], for every k ∈ N0 and every measurable function φ of k + 1 variables, provided the expectation exists.

For more details see Karlin and Taylor (1975), Chapter 9.5. So, since a stationary process X1, X2, ... is ergodic if every shift-invariant event has probability zero or one, if J has this zero-one property the average of course converges to E[X1]. Hence, we restrict ourselves to:

Theorem 2.14 (Ergodic theorem for ergodic stationary processes). If X1, X2, ... is a stationary ergodic process such that E|X1| < ∞, then

    lim_{n→∞} (1/n) Σ_{k=1}^{n} Xk = E[X1]   a.s. and in mean.

Proof. See Grimmett and Stirzaker (1992), Chapter 9.5.
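Theorem 2.14 can be illustrated with a simple stationary ergodic process, e.g. the moving average Xk = (Zk + Zk+1)/2 of an i.i.d. driver; this example and its parameters are assumptions for illustration, not from the thesis. The time average along one long path approaches the ensemble mean E[X1].

```python
import random

# A stationary, ergodic process built from an i.i.d. driver:
# X_k = (Z_k + Z_{k+1}) / 2 with Z_i i.i.d. Uniform(0, 1), so that
# E[X_1] = E[Z_1] = 0.5.
random.seed(7)
n = 200_000
Z = [random.random() for _ in range(n + 1)]
X = [(Z[k] + Z[k + 1]) / 2 for k in range(n)]

# Theorem 2.14: the time average along one path converges to E[X_1].
time_average = sum(X) / n
assert abs(time_average - 0.5) < 0.01
```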

2.4 Properties of weak convergence

Let S be a metric space and S the Borel σ-field in S, i.e., the smallest σ-field containing all the open sets. Let Cb(S) be the set of all bounded continuous real functions f on S, and let P be a probability measure on S, i.e., a nonnegative, countably additive set function with P(S) = 1.

Definition 2.15. Let Pn for n ∈ N and P be probability measures on (S, S). We say that {Pn}_{n∈N} converges weakly to P (written Pn →^w P) if for all f ∈ Cb(S):

    ∫_S f dPn → ∫_S f dP.

Definition 2.16. A set A in S whose boundary ∂A satisfies P(∂A) = 0 is called a P-continuity set.

Theorem 2.17 (Portmanteau theorem). Let Pn for n ∈ N and P be probability measures on (S, S). The following five conditions are equivalent:

i) Pn →^w P,

ii) lim_{n→∞} ∫_S f dPn = ∫_S f dP, for all bounded uniformly continuous functions f: S → R,

iii) lim sup_{n→∞} Pn(F) ≤ P(F), for all closed F ⊆ S,

iv) lim inf_{n→∞} Pn(G) ≥ P(G), for all open G ⊆ S,

v) lim_{n→∞} Pn(A) = P(A), for all P-continuity sets A.

Proof. See Billingsley (1968), pp. 11–14.
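Condition v) can be observed numerically: for standardized sums of uniforms, whose laws converge weakly to N(0, 1) by the central limit theorem, Pn(A) approaches P(A) for a P-continuity set A = (−∞, a]. The setup below is an illustrative assumption, not from the thesis.

```python
import math
import random

def standardized_sum(n, rng):
    """Z_n = (U_1 + ... + U_n - n/2) / sqrt(n/12) for U_i ~ Uniform(0,1);
    by the central limit theorem the law of Z_n converges weakly to N(0,1)."""
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2) / math.sqrt(n / 12)

rng = random.Random(2)
a = 0.5                                          # A = (-inf, a], P({a}) = 0
target = 0.5 * (1 + math.erf(a / math.sqrt(2)))  # Phi(a) for N(0, 1)

n_sim = 100_000
p_hat = sum(standardized_sum(30, rng) <= a for _ in range(n_sim)) / n_sim
# Portmanteau v): P_n(A) -> P(A) for the P-continuity set A.
assert abs(p_hat - target) < 0.01
```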

2.5 The model

2.5.1 The idea and the notation

The aim of this work is to study a model for the portfolio of an insurance company by means of two different risk measures which take the whole profit of the company into consideration. We investigate some possible risk portfolios, which depend directly on the number of contracts to be stipulated and on the expected losses, so as to determine, if possible, the optimal number of contracts in order to maximize the profit.

Hence, we assume that an insurance company consists of n organizational units or business segments. The whole profit of the company (a positive value means gain, a negative value means loss) will be denoted by

    R = Σ_{i=1}^{n} Ri,    (2.2)

with Ri describing the stochastic gain of unit i during a fixed time period (usually one year). Moreover, we suppose that for every unit i ∈ {1, ..., n}

    Ri = νi Ni − Σ_{j=1}^{Ni} Xi,j − Yi Ni,    (2.3)

where:

- νi is the premium income for one contract of unit i,
- Ni is the number of contracts of unit i; N1 + ··· + Nn is the whole number of contracts of the insurance company,
- {Xi,j}_{j∈N} is a sequence of random variables for i ∈ {1, ..., n}: Xi,j represents the loss associated with the risk in the j-th contract of unit i, and Σ_{j=1}^{Ni} Xi,j is the total (annual) claim amount of unit i,
- Yi is a random variable which represents, for one contract of unit i, a safety loading needed to obviate both the possible approximation error in the calculation of the optimal premium and unforeseeable, catastrophic events.

Therefore, we will examine how to determine an optimal portfolio, which guarantees a maximal profit. We will discuss this problem for two risk measures: the expected shortfall, the coherent risk measure suggested by Artzner et al. (1998), and the standard deviation, which is very popular in practice.

We will analyze the portfolio by means of a risk-adjusted performance measurement; this means that we compute the return in the way commonly called RORAC (return on risk-adjusted capital). In particular, considering any risk measure ρ and provided that ρ(R) ≠ 0, we define the expected risk-adjusted return for a risk R as

    r(R, ρ) = E[R] / ρ(R).    (2.4)

In practice, the company tries hard to improve its results, which means that its aim is to maximize r. Therefore, we will try to determine the optimal values for the numbers of contracts Ni of every unit i ∈ {1, ..., n} such that this maximum is attained.

We now introduce our running examples of risk measures. We start by considering the expected shortfall, defined by

    ρ(R) = E[−R | R ≤ c]   with c ≤ 0.    (2.5)

The expected shortfall is an alternative risk measure to the quantile which overcomes some of the theoretical deficiencies of the latter. In particular, this risk measure gives some information about the size of the potential losses, given that a loss bigger than c has occurred. Then the same considerations will be repeated using the standard deviation risk measure, defined by

    ρ(R) = −E[R] + κ σ(R),    (2.6)

where κ > 0 is some positive constant and σ(R) denotes the standard deviation, i.e., σ(R) = √Var(R).
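As a concrete instance of the RORAC ratio (2.4) with the expected shortfall (2.5), one can simulate R(N) for a single unit and estimate r by Monte Carlo. The Exp(1) claims, the Exp(10) safety loading and the premium ν = 1.2 below are assumed, illustrative choices, not from the thesis.

```python
import random

def simulate_profit(N, nu, rng):
    """One draw of R(N) = nu*N - sum_{j=1}^N X_j - Y*N for a single
    business unit, with assumed illustrative distributions:
    claims X_j ~ Exp(1) (mu = 1) and safety loading Y ~ Exp(10)
    (mu_bar = 0.1)."""
    claims = sum(rng.expovariate(1.0) for _ in range(N))
    y = rng.expovariate(10.0)
    return nu * N - claims - y * N

def rorac(N, nu=1.2, c=0.0, n_sim=20_000, seed=3):
    """Monte Carlo estimate of r = E[R] / E[-R | R <= c], the expected
    risk-adjusted return (2.4) with the expected shortfall (2.5)."""
    rng = random.Random(seed)
    rs = [simulate_profit(N, nu, rng) for _ in range(n_sim)]
    tail = [r for r in rs if r <= c]
    rho = -sum(tail) / len(tail)        # expected shortfall estimate
    return (sum(rs) / n_sim) / rho

r_10 = rorac(10)   # here E[R] = 10*(1.2 - 1.0 - 0.1) = 1 > 0, so r_10 > 0
```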


2.5.2 The different variants of the model

In Chapter 4 we will examine the expected risk-adjusted return for a risk R considering the standard deviation risk measure. In particular, we will study three different variants of the model representing the whole profit of a company consisting of n units, previously defined as

    R = Σ_{i=1}^{n} (νi Ni − Σ_{j=1}^{Ni} Xi,j − Yi Ni).    (2.7)

Moreover, we make the following assumptions, which are valid throughout Chapter 4:

- {Xi,j}_{j∈N}, for all i ∈ {1, ..., n}, are sequences of independent, identically distributed (i.i.d.) random variables, with Xi,j having finite mean denoted by μi, i.e., μi = E[Xi,j] for all i ∈ {1, ..., n} and j ∈ N,
- Y1, ..., Yn are random variables having finite mean denoted by μ̄i, i.e., μ̄i = E[Yi] for all i ∈ {1, ..., n},
- all the sequences {Xi,j}_{j∈N}, with i ∈ {1, ..., n}, and the random variables Y1, ..., Yn are independent.

Remark. In Chapter 3 we will examine the above-defined model with the aid of the expected shortfall as risk measure, but there we do not need such strong assumptions. In fact, it is not necessary to require that the {Xi,j}_{j∈N} are sequences of i.i.d. random variables; rather, we will prove some statements for which it is enough to assume the existence of real constants μ1, ..., μn such that, for all i ∈ {1, ..., n},

    (1/N) Σ_{j=1}^{N} Xi,j → μi   in mean, as N → ∞.

Therefore, for a model of the form (2.7) with the above-mentioned assumptions, there are different variants depending on the choice of the distributions of the random variables and on the nature of the parameters Ni for i ∈ {1, ..., n}. In Chapter 4 we will examine three different cases which result from (2.7) if we choose the Ni in different ways: first as positive integers, then as Poisson-distributed random variables and, finally, as the sum of both. More precisely, these possibilities can be represented as follows:

1. R = Σ_{i=1}^{n} (νi Ni − Σ_{j=1}^{Ni} Xi,j − Yi Ni), with Ni a positive integer for all i ∈ {1, ..., n},

2. R = Σ_{i=1}^{n} (νi Ni − Σ_{j=1}^{Ni} Xi,j − Yi Ni), with Ni ~ POIS(λi), λi > 0, for all i ∈ {1, ..., n},

3. R = Σ_{i=1}^{n} (νi Ni − Σ_{j=1}^{Ni} Xi,j − Yi Ni), with Ni = Ni^fix + Ni^pois, where the Ni^fix are positive integers and the Ni^pois are Poisson-distributed random variables, i.e., Ni^pois ~ POIS(λi), λi > 0, for all i ∈ {1, ..., n}.
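The second variant can be sampled directly. The sketch below (single unit, with assumed Exp(1) claims, Exp(10) safety loading and ν = 1.2, none of which are specified in the thesis) checks the Wald-type identity E[R] = λ(ν − μ − μ̄), which holds by Wald's identity and the independence of Y and N.

```python
import math
import random

def poisson(lam, rng):
    """Sample a POIS(lam) variate (Knuth's multiplication method;
    adequate for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def profit_variant2(nu, lam, rng):
    """One draw of R = nu*N - sum_{j=1}^N X_j - Y*N with N ~ POIS(lam)
    (second variant, single unit); X_j ~ Exp(1) and Y ~ Exp(10) are
    assumed, illustrative distributions."""
    N = poisson(lam, rng)
    claims = sum(rng.expovariate(1.0) for _ in range(N))
    y = rng.expovariate(10.0)
    return nu * N - claims - y * N

rng = random.Random(11)
draws = [profit_variant2(nu=1.2, lam=50.0, rng=rng) for _ in range(20_000)]
sample_mean = sum(draws) / len(draws)
# By Wald's identity and independence, E[R] = lam*(nu - mu - mu_bar)
#                                           = 50*(1.2 - 1.0 - 0.1) = 5.
assert abs(sample_mean - 5.0) < 0.5
```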

Chapter 3 Results for the expected shortfall risk measure

In this chapter, we will examine the portfolio of a company, analyze the profit represented by the model previously described, and consider the expected shortfall as an aid to quantifying risk. Without loss of generality, and in order to simplify the following calculations, we start by examining a one-dimensional model. This means we assume that an insurance company consists of only one business unit. Hence, we denote the whole profit by

    R(N) = νN − Σ_{j=1}^{N} Xj − Y N,    (3.1)

where X1, X2, ... is a real-valued process which represents the claim sizes, Y is a random variable having finite mean μ̄, and N is some positive integer. Recall that it is assumed that Y and the sequence {Xj}_{j∈N} are independent, and that we still do not require any particular properties of the process {Xj}_{j∈N}. Moreover, we assume ν − μ − μ̄ > 0, i.e., the company chooses a premium income rate greater than the expected losses.

In order to estimate the performance of the company, we consider the expected risk-adjusted return for the risk R(N), which in this case can be represented by

    rN = E[R(N)] / E[−R(N) | R(N) ≤ c],   c ≤ 0,    (3.2)

provided that P(R(N) ≤ c) > 0.


More generally, in Section 3.2 we will take the same approach, considering the n-dimensional model

    R(N1, ..., Nn) = Σ_{i=1}^{n} (νi Ni − Σ_{j=1}^{Ni} Xi,j − Yi Ni)    (3.3)

with all the assumptions listed in Section 2.5.1. We begin with the following lemma, which is valid in general for any risk represented by a random variable R ∈ L¹(Ω, F, P).

Lemma 3.1. Let R be an integrable random variable on a probability space (Ω, F, P), i.e., R ∈ L¹(Ω, F, P), and let c0 denote the infimum of the support of the distribution of R, i.e., c0 = inf{c ∈ R | P(R ≤ c) > 0}. Then the map

    (c0, ∞) ∋ c ↦ E[−R | R ≤ c]

is non-increasing.

Proof. First, we consider E[−R | R ≤ c]. This conditional expectation is defined by

    E[−R | R ≤ c] = E[−R 1_{R≤c}] / P[R ≤ c].

Then we consider any constants c1, c2 ∈ (c0, ∞) such that c1 < c2. Since {R ≤ c1} and {c1 < R ≤ c2} are disjoint, it holds that 1_{R≤c2} = 1_{R≤c1} + 1_{c1<R≤c2}, and we can write

    E[−R | R ≤ c2] = ( E[−R | R ≤ c1] P[R ≤ c1] + E[−R | c1 < R ≤ c2] P[c1 < R ≤ c2] ) / P[R ≤ c2].

Then, in order to prove the monotonicity, it suffices to show that

    E[−R | R ≤ c2] ≤ E[−R | R ≤ c1].

We split the proof into two cases.

i) If P(c1 < R ≤ c2) = 0, then 1_{R≤c2} = 1_{R≤c1} P-a.s. and P(R ≤ c1) = P(R ≤ c2). Therefore, it follows that

    E[−R | R ≤ c2] = E[−R 1_{R≤c2}] / P[R ≤ c2] = E[−R 1_{R≤c1}] / P[R ≤ c1] = E[−R | R ≤ c1].

ii) If P(c1 < R ≤ c2) > 0, then we can write

    E[−R | R ≤ c2]
    = ( E[−R | R ≤ c1] P[R ≤ c1] + E[−R | c1 < R ≤ c2] P[c1 < R ≤ c2] ) / P[R ≤ c2]
    = ( E[−R | R ≤ c1] (P[R ≤ c2] − P[c1 < R ≤ c2]) + E[−R | c1 < R ≤ c2] P[c1 < R ≤ c2] ) / P[R ≤ c2]
    = E[−R | R ≤ c1] + ( E[−R | c1 < R ≤ c2] − E[−R | R ≤ c1] ) · P[c1 < R ≤ c2] / P[R ≤ c2]
    < E[−R | R ≤ c1],

since −c2 ≤ E[−R | c1 < R ≤ c2] < −c1 ≤ E[−R | R ≤ c1] (so the difference in brackets is negative) and 0 < P[c1 < R ≤ c2] / P[R ≤ c2] ≤ 1. Thus, E[−R | R ≤ c2] ≤ E[−R | R ≤ c1], as desired.
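Lemma 3.1 applies to any integrable R, in particular to the empirical distribution of a finite sample (the proof uses only the indicator decomposition), so it can be sanity-checked numerically. The standard-normal sample below is an illustrative assumption, not from the thesis.

```python
import random

def expected_shortfall(samples, c):
    """Estimate E[-R | R <= c] from a sample of the risk R."""
    tail = [r for r in samples if r <= c]
    return -sum(tail) / len(tail)

random.seed(5)
rs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# Lemma 3.1 holds for the empirical distribution of the sample as
# well, so the estimates must be non-increasing in c.
cs = (-1.5, -1.0, -0.5, 0.0, 0.5)
values = [expected_shortfall(rs, c) for c in cs]
assert all(a >= b for a, b in zip(values, values[1:]))
```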

3.1 The asymptotical limit of upper bounds

In this section we focus on the portfolio of a company represented by the model defined in (3.1). In particular, we will study the expected risk-adjusted return rN for the risk R(N). The main question is whether it is possible to find a maximal value for rN, thereby determining an optimal value of N such that this maximum is attained. We will show that the limit of the upper bounds for the expected risk-adjusted return exists under quite general conditions, and we also derive an explicit formula for the limit.

Recall that, considering the expected shortfall risk measure, rN is defined by (3.2). Now, for any c ≤ 0 we can write

    rN = E[R(N)] / E[−R(N) | R(N) ≤ c] = (E[R(N)]/N) / E[−R(N)/N | R(N)/N ≤ c/N].

Therefore, if we define

    R̄N = ν − (1/N) Σ_{j=1}^{N} Xj − Y,    (3.4)

then, since E[R(N)] = N(ν − μ − μ̄), it holds that

    rN = (ν − μ − μ̄) / E[−R̄N | R̄N ≤ c/N].    (3.5)


In particular, the aim of this chapter is to prove the following proposition.

Proposition 3.2. Let X1, X2, ... be a real-valued process and let Y be an integrable random variable. Let R̄N = ν − (1/N) Σ_{j=1}^{N} Xj − Y and assume that a real constant μ exists such that

    (1/N) Σ_{j=1}^{N} Xj → μ   in mean, as N → ∞.

Moreover, suppose that P(ν − μ − Y = 0) = 0. Then it holds that

    E[R̄N | R̄N ≤ 0] → E[ν − μ − Y | ν − μ − Y ≤ 0]   as N → ∞.

For the proof of this proposition we need the next lemma.

Lemma 3.3. Let R̄N be defined as above and assume that a real constant μ exists such that

    (1/N) Σ_{j=1}^{N} Xj → μ   in mean, as N → ∞.

Let Z be an integrable random variable and f a bounded, uniformly Lipschitz continuous function. Then it holds that

    E[Z f(R̄N)] → E[Z f(ν − μ − Y)]   as N → ∞.

Proof. Let ε > 0. Since Z ∈ L¹, by the theorem of Lebesgue there exists a real constant K > 0 such that

    E[|Z| 1_{|Z|>K}] ≤ ε.

Moreover, f is Lipschitz continuous, i.e., a real constant α exists such that

    |f(R̄N) − f(ν − μ − Y)| ≤ α |R̄N − (ν − μ − Y)|.

3.1. THE ASYMPTOTICAL LIMIT OF UPPER BOUNDS Then, E | Z f (RN ) ? Z f (¦Í ? ? ? Y ) | ¡Ü E | Z | | f (RN ) ? f (¦Í ? ? ? Y ) | ¡Ü E | Z | 1{ | Z | >K } | f (RN ) ? f (¦Í ? ? ? Y ) |

¡Ü 2 sup|f |

17

+ E | Z | 1{ | Z | ¡ÜK } | f (RN ) ? f (¦Í ? ? ? Y ) |

¡ÜK

¡Ü E | Z | 1{ | Z | >K } 2 sup|f | + K E | f (RN ) ? f (¦Í ? ? ? Y ) |

¡Ü¦Å ¡Ü ¦Á |R N ? ( ¦Í ? ? ? Y ) |

N ¡ú¡Þ

? ? ¡ú 2 ¦Å sup |f | ¡Ü 2 ¦Å sup |f | + K ¦Á E | RN ? ¦Í ? ? ? Y | ?

1 =| N N j =1

Xj ??|

Therefore, since ¦Å is arbitrary, we can conclude that, as desired, ? ? ¡ú 0. E | Z f (RN ) ? Z f (¦Í ? ? ? Y ) | ? Proof (Proposition 3.2). We remember that E [RN | RN ¡Ü 0] is de?ned by E [RN | RN ¡Ü 0] = E RN 1{RN ¡Ü0} . P [RN ¡Ü 0] (3.6)

N ¡ú¡Þ

We first consider E[R̄N 1_{R̄N≤0}]. Clearly, it holds that

    E[R̄N 1_{R̄N≤0}] = E[(ν − (1/N) Σ_{j=1}^{N} Xj − Y) 1_{R̄N≤0}]
                    = ν P[R̄N ≤ 0] − E[(1/N) Σ_{j=1}^{N} Xj 1_{R̄N≤0}] − E[Y 1_{R̄N≤0}].    (3.7)

Since

    R̄N → ν − μ − Y   in mean, as N → ∞,

and because convergence in mean implies convergence in distribution, it holds that

    P(R̄N ≤ 0) → P(ν − μ − Y ≤ 0)   as N → ∞.


Therefore, since it is assumed that P(ν − μ − Y = 0) = 0, we can prove, on the one hand, that

    E[(1/N) Σ_{j=1}^{N} Xj 1_{R̄N≤0}] → μ P(ν − μ − Y ≤ 0)   as N → ∞,    (3.8)

and, on the other, that

    E[Y 1_{R̄N≤0}] → E[Y | ν − μ − Y ≤ 0] P(ν − μ − Y ≤ 0)   as N → ∞.    (3.9)

From

    E[|(1/N) Σ_{j=1}^{N} Xj − μ|] → 0   as N → ∞

it follows that

    |E[(1/N) Σ_{j=1}^{N} Xj 1_{R̄N≤0}] − E[μ 1_{R̄N≤0}]| = |E[((1/N) Σ_{j=1}^{N} Xj − μ) 1_{R̄N≤0}]|
    ≤ E[|(1/N) Σ_{j=1}^{N} Xj − μ|] → 0   as N → ∞,

since 1_{R̄N≤0} ≤ 1. Thus, it still remains to calculate

    lim_{N→∞} E[μ 1_{R̄N≤0}]   and   lim_{N→∞} E[Y 1_{R̄N≤0}].

Let Z be any integrable random variable. We will determine

    lim_{N→∞} E[Z 1_{R̄N≤0}]

and then employ the result for Z = μ and Z = Y. As shown in Figure 3.1, let fn and gn be bounded continuous functions defined by

    fn(x) = { 1,        if x ∈ (−∞, −1/n],
            { −nx,      if x ∈ (−1/n, 0),
            { 0,        if x ∈ [0, ∞),

and

    gn(x) = { 1,        if x ∈ (−∞, 0],
            { 1 − nx,   if x ∈ (0, 1/n),
            { 0,        if x ∈ [1/n, ∞).

[Figure 3.1: Continuous approximation of the indicator function 1_{(−∞,0]} from below by fn and from above by gn.]

Instead of 1_{R̄N≤0} it is useful to write 1_{(−∞,0]} ∘ R̄N; then it follows that

    1_{(−∞,−1/n]} ∘ R̄N ≤ fn(R̄N) ≤ 1_{(−∞,0]} ∘ R̄N ≤ gn(R̄N) ≤ 1_{(−∞,1/n]} ∘ R̄N,

and clearly for Z ≥ 0 it holds that

    E[Z fn(R̄N)] ≤ E[Z (1_{(−∞,0]} ∘ R̄N)] ≤ E[Z gn(R̄N)].    (3.10)

For a general integrable random variable Z, consider the decomposition Z = Z⁺ − Z⁻ with Z⁺ = max{Z, 0} and Z⁻ = max{−Z, 0}. Then, considering the left and right sides of the inequality (3.10) separately, since fn and gn are Lipschitz continuous, i.e.,

    |fn(R̄N) − fn(ν − μ − Y)| ≤ n |R̄N − (ν − μ − Y)|

and analogously

    |gn(R̄N) − gn(ν − μ − Y)| ≤ n |R̄N − (ν − μ − Y)|,

it follows from Lemma 3.3 that, as N → ∞,

    E[Z fn(R̄N)] → E[Z fn(ν − μ − Y)]   and   E[Z gn(R̄N)] → E[Z gn(ν − μ − Y)].

Moreover, since fn(ν − μ − Y) → 1_{(−∞,0)} ∘ (ν − μ − Y) pointwise as n → ∞, it follows from the Lebesgue theorem 2.4 that

    E[Z fn(ν − μ − Y)] → E[Z (1_{(−∞,0)} ∘ (ν − μ − Y))]   as n → ∞,

20 and analogously

CHAPTER 3. EXPECTED SHORTFALL

E Z gn (¦Í ? ? ? Y ) ??¡ú E Z 1(?¡Þ,0] ? (¦Í ? ? ? Y ) . Now, by the assumption P [¦Í ? ? ? Y = 0] = 0, E Z 1(?¡Þ,0) ? (¦Í ? ? ? Y ) = E Z 1(?¡Þ,0] ? (¦Í ? ? ? Y ) and consequently we obtain the statements (3.8) and (3.9) replacing Z = ? and Z = Y , respectively, i.e., E ?1{RN ¡Ü0} ? ? ? ¡ú E ?1{¦Í ???Y ¡Ü0} = ? P ¦Í ? ? ? Y ¡Ü 0 , and ? ? ¡ú E Y 1{¦Í ???Y ¡Ü0} E Y 1{RN ¡Ü0} ? = E [Y | ¦Í ? ? ? Y ¡Ü 0] P (¦Í ? ? ? Y ¡Ü 0) . Thus, to conclude, E [RN 1{RN ¡Ü0} ] ? ? ? ¡ú ¦Í ? ? ? E [Y | ¦Í ? ? ? Y ¡Ü 0] P (¦Í ? ? ? Y ¡Ü 0) and given that this last quantity equals E [¦Í ? ? ? Y | ¦Í ? ? ? Y ¡Ü 0] P (¦Í ? ? ? Y ¡Ü 0) , the statement in the proposition follows directly from the de?nition of the conditional expectation. Consequently, from Lemma 3.1 and from Proposition 3.2 we obtain the limit of the upper bounds for the expected risk-adjusted return rN . In fact, we also have N ¡ú¡Þ rN ? ? ? ¡ú r¡Þ , as RN ? ? ? ¡ú ¦Í ? ? ? Y in mean and in law and hence P RN ¡Ü and ? ? ¡ú E (¦Í ? ? ? Y ) 1¦Í ???Y ¡Ü0 . E RN 1RN ¡Üc/N ?

N ¡ú¡Þ N ¡ú¡Þ N ¡ú¡Þ N ¡ú¡Þ N ¡ú¡Þ

n¡ú¡Þ

c N ¡ú¡Þ ? ? ? ¡úP ¦Í???Y ¡Ü0 N
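The sandwich f_n ≤ 1_{(−∞,0]} ≤ g_n used in the proof can be illustrated numerically. The following Python sketch is not part of the thesis; a standard normal sample is an arbitrary stand-in for R_N. Because the inequalities hold pointwise, the sample averages must respect the same ordering, and the gap between the two approximations shrinks as n grows.

```python
import random

def f_n(x, n):
    # lower continuous approximation of the indicator 1_{(-inf, 0]}
    if x <= -1.0 / n:
        return 1.0
    return -n * x if x < 0.0 else 0.0

def g_n(x, n):
    # upper continuous approximation of the indicator 1_{(-inf, 0]}
    if x <= 0.0:
        return 1.0
    return 1.0 - n * x if x < 1.0 / n else 0.0

def sandwich(sample, n):
    """Sample averages of f_n(R), 1_{R <= 0} and g_n(R)."""
    m = len(sample)
    lo = sum(f_n(r, n) for r in sample) / m
    mid = sum(1.0 for r in sample if r <= 0.0) / m
    hi = sum(g_n(r, n) for r in sample) / m
    return lo, mid, hi

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(10_000)]
gaps = []
for n in (1, 10, 100):
    lo, mid, hi = sandwich(sample, n)
    assert lo <= mid <= hi        # holds pointwise, hence for averages
    gaps.append(hi - lo)
assert gaps[0] >= gaps[1] >= gaps[2]  # approximation tightens as n grows
```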


Remark. We obtain the same result if we assume a model of the form

R(N) = νN − Σ_{j=1}^N X_j − YN

such that X_1, X_2, . . . is a real stationary ergodic process having finite mean μ = E[X_1], and P(ν − μ − Y = 0) = 0. In fact, as a result of the Ergodic Theorem 2.14, it holds that

(1/N) Σ_{j=1}^N X_j → μ  a.s. and in mean, as N → ∞.

Therefore, the requirements of Proposition 3.2 are satisfied and it follows that, as N → ∞, the expected risk-adjusted return for R(N) converges to r_∞.

We would like to illustrate Proposition 3.2 by concentrating on normally distributed random variables and by giving a numerical example.

Example 1. Assume that the insurance company consists of one business unit, i.e., n = 1. Then we consider

R(N) = νN − Σ_{j=1}^N X_j − YN.

Suppose that {X_j}_{j∈N} is an i.i.d. sequence of normally distributed random variables and that Y is a normally distributed random variable independent of the sequence {X_j}_{j∈N}. In particular, we write

X_j ~ N(μ, σ²) for all j ∈ N,  and  Y ~ N(μ̃, σ̃²).

Because of the particular properties of the normal distribution, it holds that

(1/N) Σ_{j=1}^N X_j ~ N(μ, σ²/N)

and therefore

R_N = ν − (1/N) Σ_{j=1}^N X_j − Y ~ N( ν − μ − μ̃, σ²/N + σ̃² ).

For more details see Johnson et al. (1994), Chapter 13, Section 3. Based on R_N, we can calculate the expected risk-adjusted return r_N. Given c ≤ 0, define

c_N = ( c/N − E[R_N] ) / √(Var(R_N)) = ( c/N − ν + μ + μ̃ ) / √( σ²/N + σ̃² ).


Using (A.3) from the Appendix, we obtain

r_N = E[R(N)] / E[−R(N) | R(N) ≤ c]
    = E[R_N] / E[−R_N | R_N ≤ c/N]
    = ( ν − μ − μ̃ ) / ( −ν + μ + μ̃ + √(σ²/N + σ̃²) · φ(c_N)/Φ(c_N) ),

where φ and Φ denote the standard normal density and distribution function. Note that the last expression is mathematically meaningful even if N is not an integer. If c = 0, the above expression simplifies to

r_N = −c_N / ( c_N + φ(c_N)/Φ(c_N) ).

For every c ≤ 0 we get

c_∞ := lim_{N→∞} c_N = ( −ν + μ + μ̃ ) / σ̃,

hence

r_∞ = lim_{N→∞} r_N = ( ν − μ − μ̃ ) / ( −ν + μ + μ̃ + σ̃ · φ(c_∞)/Φ(c_∞) ) = −c_∞ / ( c_∞ + φ(c_∞)/Φ(c_∞) ).

Let us now choose the parameters as follows:

ν = 5, μ = 1, μ̃ = 2, σ = 2, σ̃ = 1,

and consider different values for c, namely c = 0, c = −1, c = −2, c = −5, respectively. In Figure 3.2, a graphical representation of this situation is shown. In Figure 3.3, we can observe that the returns {r_N}_{N∈N} converge to r_∞ for any c ≤ 0, as asserted in Proposition 3.2. In fact,

r_∞ = 2 / ( −2 + φ(−2)/Φ(−2) ) ≈ 5.359.
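The numbers of Example 1 can be reproduced with a few lines of code. The following Python sketch (not part of the thesis) evaluates r_N and r_∞ with the parameter values above, using the standard library error function for Φ, and confirms the reported limit 5.359 and the convergence for every c ≤ 0.

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def r_N(N, c, nu, mu, mu_t, sigma, sigma_t):
    """Expected risk-adjusted return of the one-unit normal model."""
    s = math.sqrt(sigma ** 2 / N + sigma_t ** 2)
    c_N = (c / N - nu + mu + mu_t) / s
    return (nu - mu - mu_t) / (-nu + mu + mu_t + s * phi(c_N) / Phi(c_N))

def r_inf(c_inf):
    # limiting return, expressed through c_inf only
    return -c_inf / (c_inf + phi(c_inf) / Phi(c_inf))

# parameter values of Example 1
params = dict(nu=5.0, mu=1.0, mu_t=2.0, sigma=2.0, sigma_t=1.0)
c_inf = (-params["nu"] + params["mu"] + params["mu_t"]) / params["sigma_t"]  # = -2
assert abs(r_inf(c_inf) - 5.359) < 5e-3          # value reported in the text
for c in (0.0, -1.0, -2.0, -5.0):                # convergence r_N -> r_inf
    assert abs(r_N(10 ** 6, c, **params) - r_inf(c_inf)) < 1e-2
```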

Figure 3.2: This figure shows the risk-adjusted returns r_N of the form mentioned above for the parameter values under the assumption that c = 0 (dotted line), c = −1 (short-dashed line), c = −2 (dashed line), c = −5 (solid line), respectively, together with the limit r_∞ (long-dashed line).

Figure 3.3: Plot of the same returns r_N shown in Figure 3.2 for greater values of N.


3.2 The n-dimensional model

The previous considerations are also valid for an n-dimensional model. In fact, in general we can denote the whole profit of a company consisting of n units by

R(N_1, . . . , N_n) = N_1( ν_1 − (1/N_1) Σ_{j=1}^{N_1} X_{1,j} − Y_1 ) + · · · + N_n( ν_n − (1/N_n) Σ_{j=1}^{N_n} X_{n,j} − Y_n ),

where the N_i are positive integers for all i ∈ {1, . . . , n} and Y_1, . . . , Y_n are integrable random variables. Moreover, assume that real constants μ_1, . . . , μ_n exist such that, for all i ∈ {1, . . . , n},

(1/N_i) Σ_{j=1}^{N_i} X_{i,j} → μ_i  in mean, as N_i → ∞.

Then we define

R̃(N_1, . . . , N_n) = R(N_1, . . . , N_n) / (N_1 + · · · + N_n)

and we obtain that the expected risk-adjusted return for all c ≤ 0 can be written as

r(N_1, . . . , N_n) = E[R(N_1, . . . , N_n)] / E[−R(N_1, . . . , N_n) | R(N_1, . . . , N_n) ≤ c]
= E[R̃(N_1, . . . , N_n)] / E[−R̃(N_1, . . . , N_n) | R̃(N_1, . . . , N_n) ≤ c/(N_1 + · · · + N_n)].

Moreover, we assume that N_1, . . . , N_n → ∞ in such a way that the limit

N_i / (N_1 + · · · + N_n) → t_i

exists for all i = 1, . . . , n.

Then, clearly, it follows that

(t_1, . . . , t_n) ∈ [0, 1]^n  and  Σ_{i=1}^n t_i = 1,   (3.11)

and that

R̃(N_1, . . . , N_n) → R̃_∞(t) := Σ_{i=1}^n t_i( ν_i − μ_i − Y_i )  in mean, as N_1, . . . , N_n → ∞,

for all t = (t_1, . . . , t_n) satisfying (3.11). Therefore, assuming that

P( R̃_∞(t) = 0 ) = 0,

it holds that

E[ R̃(N_1, . . . , N_n) | R̃(N_1, . . . , N_n) ≤ 0 ] → E[ R̃_∞(t) | R̃_∞(t) ≤ 0 ]  as N_1, . . . , N_n → ∞.

Note that this is the same statement we arrived at in (3.2) for the simplest model. For this reason, we omit the proof in this case, because it suffices to adapt the arguments considered there. Analogously, as before, we obtain that the limit of the upper bounds exists and will be denoted by

r(N_1, . . . , N_n) → r_∞(t_1, . . . , t_n)  as N_1, . . . , N_n → ∞,   (3.12)

where

r_∞(t_1, . . . , t_n) = Σ_{i=1}^n t_i( ν_i − μ_i − μ̃_i ) / E[ −R̃_∞(t_1, . . . , t_n) | R̃_∞(t_1, . . . , t_n) ≤ 0 ].

We can observe that the limit of the upper bounds obtained for the n-dimensional model depends directly on (t_1, . . . , t_n). If P( R̃_∞(t) = 0 ) = 0 for every t = (t_1, . . . , t_n) satisfying (3.11), then (3.12) holds for all these t and it makes sense to determine the maximum of r_∞(t_1, . . . , t_n) over the (t_1, . . . , t_n) satisfying (3.11). In fact, if r_∞ is upper semi-continuous, then this maximum is attained by some (t*_1, . . . , t*_n), which can be calculated through

(t*_1, . . . , t*_n) = argmax_{(t_1, . . . , t_n), t_i ∈ [0,1]}  Σ_{i=1}^n t_i( ν_i − μ_i − μ̃_i ) / E[ −R̃_∞(t_1, . . . , t_n) | R̃_∞(t_1, . . . , t_n) ≤ 0 ].

Then, for all (t_1, . . . , t_n) satisfying (3.11), it holds that

r_∞(t_1, . . . , t_n) ≤ r_∞(t*_1, . . . , t*_n) < ∞.

Note that, if Y_1, . . . , Y_n are independent with continuous distribution functions, then P( R̃_∞(t) = 0 ) = 0 certainly holds for all (t_1, . . . , t_n) satisfying (3.11).


Example 2. Assume that the insurance company consists of two business units, i.e., n = 2. Then

R(N_1, N_2) = ν_1N_1 − Σ_{j=1}^{N_1} X_{1,j} − Y_1N_1 + ν_2N_2 − Σ_{j=1}^{N_2} X_{2,j} − Y_2N_2.

Assume that X_{i,j} ~ N(μ_i, σ_i²) and Y_i ~ N(μ̃_i, σ̃_i²) for all i ∈ {1, 2} and j ∈ N. Furthermore, assume that all the random variables are independent. For simplicity, we denote

μ̂_i = ν_i − μ_i − μ̃_i for i ∈ {1, 2},  μ̂ = N_1μ̂_1 + N_2μ̂_2,

and

σ̂² = N_1σ_1² + N_1²σ̃_1² + N_2σ_2² + N_2²σ̃_2².

Then we have R(N_1, N_2) ~ N(μ̂, σ̂²). Using (A.3) as in the previous example, we obtain for c ≤ 0

r(N_1, N_2) = E[R(N_1, N_2)] / E[−R(N_1, N_2) | R(N_1, N_2) ≤ c] = μ̂ / ( −μ̂ + σ̂ · φ(ĉ)/Φ(ĉ) ),  with ĉ = (c − μ̂)/σ̂.

Assume that N_1, N_2 → ∞ such that the limit

N_1 / (N_1 + N_2) → t

exists. Then

μ̂ / (N_1 + N_2) → tμ̂_1 + (1 − t)μ̂_2  and  σ̂ / (N_1 + N_2) → √( t²σ̃_1² + (1 − t)²σ̃_2² ).

Hence

r(N_1, N_2) → r_∞(t)  as N_1, N_2 → ∞,

with

r_∞(t) = ( tμ̂_1 + (1 − t)μ̂_2 ) / ( −tμ̂_1 − (1 − t)μ̂_2 + √( t²σ̃_1² + (1 − t)²σ̃_2² ) · φ(c(t))/Φ(c(t)) )
       = −c(t) / ( c(t) + φ(c(t))/Φ(c(t)) ),

where

c(t) = −( tμ̂_1 + (1 − t)μ̂_2 ) / √( t²σ̃_1² + (1 − t)²σ̃_2² ).

We now want to find the t* ∈ [0, 1] which maximizes the limiting expected risk-adjusted return [0, 1] ∋ t ↦ r_∞(t). We start with the following lemma.

Lemma 3.4. The function

(−∞, 0] ∋ c ↦ −c / ( c + φ(c)/Φ(c) )

is monotonely decreasing.

Proof. Using the substitution z = x²/2, we get for every c ∈ (−∞, 0)

Φ(c) = ∫_{−∞}^c φ(x) dx < ∫_{−∞}^c (x/c) φ(x) dx = (1/√(2π)) · (1/(−c)) ∫_{c²/2}^∞ e^{−z} dz = −φ(c)/c,

hence

(1/c) · φ(c)/Φ(c) < −1.

Since −c/( c + φ(c)/Φ(c) ) = −1/( 1 + g(c) ) with

g(c) = φ(c) / ( c Φ(c) ),

it therefore suffices to show that g : (−∞, 0) → R is monotonely decreasing. Since

g′(c) = ( −c²φ(c)Φ(c) − φ(c)Φ(c) − cφ(c)² ) / ( c²Φ(c)² ),  c ∈ (−∞, 0),

we obtain that for every c ∈ (−∞, 0)

g′(c) ≤ 0  ⟺  (c² + 1)Φ(c) + cφ(c) ≥ 0  ⟺  (1 + 1/c²)Φ(c) ≥ −φ(c)/c.

Since for c < 0

(1 + 1/c²)Φ(c) = ∫_{−∞}^c (1 + 1/c²) φ(x) dx ≥ ∫_{−∞}^c (1 + 1/x²) φ(x) dx = [ −φ(x)/x ]_{x=−∞}^{c} = −φ(c)/c,

the lemma is proved.
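Both the statement of Lemma 3.4 and the bound from the first step of its proof can be checked numerically. The following Python sketch (not part of the thesis) samples the function on a grid of negative arguments.

```python
import math

def mills_inv(c):
    # phi(c) / Phi(c) for the standard normal distribution
    phi = math.exp(-0.5 * c * c) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(c / math.sqrt(2.0)))
    return phi / Phi

def h(c):
    # the function of Lemma 3.4
    return -c / (c + mills_inv(c))

grid = [-5.0 + 0.05 * k for k in range(99)]      # c in [-5, -0.1]
vals = [h(c) for c in grid]
# monotonely decreasing on the grid
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
# the bound (1/c) * phi(c)/Phi(c) < -1 from the first step of the proof
assert all((1.0 / c) * mills_inv(c) < -1.0 for c in grid)
```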


Figure 3.4: Plot of the risk-adjusted return r(N_1, N_2): at the top left for the parameter c = 0, at the top right for c = −1, at the bottom left for c = −2, at the bottom right for c = −5.

Due to the lemma, it remains for us to find the t* ∈ [0, 1] which minimizes [0, 1] ∋ t ↦ c(t). Solving the equation c′(t*) = 0 leads to

t* = μ̂_1σ̃_2² / ( μ̂_1σ̃_2² + μ̂_2σ̃_1² )

and

c(t*) = −√( μ̂_1²/σ̃_1² + μ̂_2²/σ̃_2² ).

This is the minimum in [0, 1], because

c(1) = −μ̂_1/σ̃_1 > c(t*)  and  c(0) = −μ̂_2/σ̃_2 > c(t*).
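The closed form for t* can be verified numerically. The following Python sketch (not part of the thesis) uses the parameter values μ̂_1 = 2, μ̂_2 = 1, σ̃_1 = σ̃_2 = 1 of the example below and checks both the formula for c(t*) and that t* is the minimum of c(t) over a grid on [0, 1].

```python
import math

def c_of_t(t, m1, m2, s1, s2):
    """c(t) = -(t*m1 + (1-t)*m2) / sqrt(t^2*s1^2 + (1-t)^2*s2^2)."""
    return -(t * m1 + (1.0 - t) * m2) / math.sqrt(
        t * t * s1 * s1 + (1.0 - t) * (1.0 - t) * s2 * s2)

# values of mu^_1, mu^_2, sigma~_1, sigma~_2 used in the worked example
m1, m2, s1, s2 = 2.0, 1.0, 1.0, 1.0
t_star = m1 * s2 ** 2 / (m1 * s2 ** 2 + m2 * s1 ** 2)   # closed-form minimizer
c_min = -math.sqrt((m1 / s1) ** 2 + (m2 / s2) ** 2)     # c(t*)
assert abs(t_star - 2.0 / 3.0) < 1e-12
assert abs(c_of_t(t_star, m1, m2, s1, s2) - c_min) < 1e-12
for k in range(101):                                    # t* is the minimum on [0, 1]
    assert c_of_t(k / 100.0, m1, m2, s1, s2) >= c_min - 1e-12
```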


Figure 3.5: Plot of the risk-adjusted return r(N_1, N_2) for the parameters c = −5, c = −2, c = −1, c = 0, respectively.

In Figures 3.4–3.7 we give a graphical representation assuming that

ν_1 = 5, ν_2 = 3, μ_1 = 1, μ_2 = 1, μ̃_1 = 2, μ̃_2 = 1, σ_1 = 2, σ_2 = 1, σ̃_1 = 1, σ̃_2 = 1,

and choosing c = 0, c = −1, c = −2, c = −5, respectively.


Figure 3.6: Plot of the risk-adjusted return r(N_1, N_2) for N_1 = tN and N_2 = (1 − t)N and for the parameters c = −5, c = −2, c = −1, c = 0, respectively.

Figure 3.7: Plot of the limit r_∞(t).

In this case, it holds that

r_∞(t) = −c(t) / ( c(t) + φ(c(t))/Φ(c(t)) ),

where

c(t) = −(t + 1) / √( 2t² − 2t + 1 ).

Moreover, the t* ∈ [0, 1] which maximises r_∞(t) is t* = 2/3, and therefore

r_∞(t*) ≈ 6.429.

We can observe that, for the same values of N_1 and N_2, r(N_1, N_2) becomes greater as c increases. This is a direct consequence of Lemma 3.1, which shows that the map c ↦ E[−R | R ≤ c] is monotonely decreasing on (c_0, ∞), where c_0 = inf{c ∈ R | P(R ≤ c) > 0}. Moreover, assuming that N_1, N_2 → ∞ such that N_1/(N_1 + N_2) → t, we can observe in the figures that in any case (for every c ≤ 0) the return r(N_1, N_2) converges to r_∞(t). In particular, in Figure 3.6, we can observe that r_∞(t) ≤ r_∞(t*) ≈ 6.429 for all t ∈ [0, 1].
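The value r_∞(t*) ≈ 6.429 reported above can be reproduced directly. The following Python sketch (not part of the thesis) evaluates r_∞(t) with the c(t) of this example and checks that a grid maximum over [0, 1] sits at t* = 2/3.

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def r_inf(t):
    # limit return of the example (mu^_1 = 2, mu^_2 = 1, sigma~_1 = sigma~_2 = 1)
    c = -(t + 1.0) / math.sqrt(2.0 * t * t - 2.0 * t + 1.0)
    return -c / (c + phi(c) / Phi(c))

t_star = 2.0 / 3.0
assert abs(r_inf(t_star) - 6.429) < 1e-2       # value reported in the text
best_k = max(range(101), key=lambda k: r_inf(k / 100.0))
assert abs(best_k / 100.0 - t_star) < 0.02     # grid maximum is at t* = 2/3
```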

3.3 Conclusion

If we examine the portfolio of an insurance company represented by the model (3.1) and we try to optimize it by determining the number of contracts, we can conclude that, for the expected risk-adjusted return r_N defined by (3.2), a limit exists. Considering the n-dimensional model, we obtain a limit of the upper bounds which depends directly on the partition of the contracts between the different n units of the company. An optimal partition can be calculated explicitly in the case of normal distributions; in general, it has to be determined numerically. Nevertheless, we cannot find the optimal number of contracts which should assure a maximal risk-adjusted return to the company. Usually, the company has a fixed capital C at its disposal to invest. So, considering that the insurance company wants to invest this capital C, it is possible to calculate an optimal solution to the following problem:

maximize E[R(N_1, . . . , N_n)] / C
subject to ρ(R(N_1, . . . , N_n)) ≤ C and N_i ≥ 0.


In this case, using the expected shortfall as risk measure, we do not obtain a general solution, but it exists and can be found numerically if the distributions of the Y_i for i = 1, . . . , n are known. If the random variables Y_i for i = 1, . . . , n are independent and normally distributed, this numerical calculation is not very complicated, because of the particular properties of the normal distribution. However, we can observe that the obtained solution is not always a good one, because frequently the optimal values of N_i for i = 1, . . . , n are not integers. For this reason we have to consider once more which optimal integer values we shall choose. In the following chapters we will show that we obtain better solutions to such a problem if we take the standard deviation risk measure into consideration.

Chapter 4

Results for the standard deviation risk measure

In this chapter, we want to study the model of the portfolio of an insurance company defined in Section 2.5 once again. We will now examine the risk-adjusted performance of the company by studying the return

r = E[R] / ρ(R)

and using the standard deviation risk measure of R defined by

ρ(R) = −E[R] + κ σ(R),

where κ > 0 is some positive constant and σ(R) denotes the standard deviation of the risk R. In consequence, we assume the existence of second moments for the following, because the definition of σ(R) clearly requires this. Considering the portfolio of the form R = Σ_{i=1}^n R_i, let the R_i be real-valued random variables on (Ω, F, P) having finite expectation and finite second moment for all i ∈ {1, . . . , n},

R_i = ν_iN_i − Σ_{j=1}^{N_i} X_{i,j} − Y_iN_i,   (4.1)

where the N_i are positive integers for all i ∈ {1, . . . , n}. For the sequences {X_{i,j}}_{j∈N} and for the random variables Y_i with i ∈ {1, . . . , n}, the assumptions mentioned in Section 2.5.2 are still valid. Moreover, we assume that (R_1, . . . , R_n) is non-trivial, meaning that ρ(R) takes values other than zero, where R is the portfolio of (R_1, . . . , R_n).


Let C > 0 be the capital that the company wants to invest. Now we look at the following problem:

maximize E[R] / C
subject to ρ(R) ≤ C,   (4.2)
N_i ≥ 0 integers for i ∈ {1, . . . , n}, but not all of them equal to zero.

We try to determine the optimal number of contracts in order to obtain a maximal return r, which means a better performance for the company. First, we will examine the simplest cases with n = 1 and n = 2, and then we will take the same approach for a general n-dimensional model. In Section 4.3 we will repeat the same procedure, again analyzing the same model, but this time with the assumption that the N_i are Poisson-distributed random variables with parameters λ_i > 0 for all i = 1, . . . , n. This last assumption is convenient for us, because in this case the optimal solution of the optimization problem consists of positive real values which are no longer required to be integers.

4.1 The simplest cases: n = 1 and n = 2

We start by considering n = 1. Let the portfolio be represented by

R(N) = νN − Σ_{j=1}^N X_j − YN   (4.3)

and take the assumptions mentioned in Section 2.5. We then have that N is a positive integer, {X_j}_{j∈N} are uncorrelated random variables with finite mean μ, Y has finite mean μ̃, and the sequence {X_j}_{j∈N} and Y are uncorrelated. Moreover, we assume that both the X_j, j ∈ N, and Y have finite variances, denoted by σ² = Var(X_j), independent of j ∈ N, and σ̃² = Var(Y), respectively. Then we can compute

E[R(N)] = N(ν − μ − μ̃),
ρ(R(N)) = −N(ν − μ − μ̃) + κ √( Nσ² + N²σ̃² ).

We get that the optimization problem (4.2) can be written as

maximize N(ν − μ − μ̃) / C
subject to −N(ν − μ − μ̃) + κ √(Nσ² + N²σ̃²) ≤ C,   (4.4)
N > 0 integer.

For the sake of practicality we define

a = (ν − μ − μ̃) / C.

Note that a is a positive constant, because ν − μ − μ̃ is assumed to be positive and the capital C to invest is positive too. Moreover, from now on, we assume

(κ/C) σ̃ > a.   (4.5)

This means that even in the case σ² = 0 we get ρ(R(N)) > 0 for every N ∈ N; hence, accepting contracts involves real risk. Furthermore, for any σ² ≥ 0,

lim_{N→∞} ρ(R(N)) / N > 0,

i.e., the risk per contract stays positive even in the limit N → ∞. Returning to the optimization problem: if we square the constraint in (4.4), we reduce the previous formulation to

maximize aN
subject to N²( κ²σ̃²/C² − a² ) + N( κ²σ²/C² − 2a ) − 1 ≤ 0,   (4.6)
N > 0 integer.

Given that κσ̃/C is assumed to be greater than a, we obtain that the constraint (4.6) is fulfilled if N ∈ [z_1, z_2] ∩ N, where z_1 and z_2 denote the zeros of the quadratic on the left-hand side. So, since a is positive and the function to maximize grows with increasing N, the value of N which satisfies all requirements is the greatest positive integer satisfying condition (4.6). The greatest zero of (4.6) has the following form:

z_2 = [ −(κ²σ²/C² − 2a) + √( (κ²σ²/C² − 2a)² + 4(κ²σ̃²/C² − a²) ) ] / [ 2( κ²σ̃²/C² − a² ) ]
    = [ 2aC² − κ²σ² + κ √( κ²σ⁴ − 4aσ²C² + 4σ̃²C² ) ] / [ 2( κ²σ̃² − a²C² ) ].

We observe that z_2 is clearly positive but not necessarily an integer. In consequence of these arguments, we can conclude that the solution of the optimization problem (4.4) is

N* = ⌊ ( 2aC² − κ²σ² + κ √( κ²σ⁴ − 4aσ²C² + 4σ̃²C² ) ) / ( 2( κ²σ̃² − a²C² ) ) ⌋,

where ⌊x⌋ denotes the greatest integer smaller than or equal to x, i.e., ⌊x⌋ = max{k ∈ Z | k ≤ x}.
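The closed form for N* can be cross-checked against the original constraint. The following Python sketch (not part of the thesis) uses hypothetical parameter values satisfying assumption (4.5) and verifies that N* is feasible while N* + 1 is not.

```python
import math

def rho(N, num, sigma, sigma_t, kappa):
    """Risk rho(R(N)) of (4.4); num = nu - mu - mu~."""
    return -N * num + kappa * math.sqrt(N * sigma ** 2 + N ** 2 * sigma_t ** 2)

def n_star(C, num, sigma, sigma_t, kappa):
    a = num / C
    assert kappa * sigma_t / C > a                     # assumption (4.5)
    root = kappa * math.sqrt(
        kappa ** 2 * sigma ** 4 - 4.0 * a * sigma ** 2 * C ** 2
        + 4.0 * sigma_t ** 2 * C ** 2)
    z2 = (2.0 * a * C ** 2 - kappa ** 2 * sigma ** 2 + root) / (
        2.0 * (kappa ** 2 * sigma_t ** 2 - a ** 2 * C ** 2))
    return math.floor(z2)

# hypothetical parameters: nu - mu - mu~ = 2, sigma = 2, sigma~ = 1.5, kappa = 2, C = 100
C, num, sigma, sigma_t, kappa = 100.0, 2.0, 2.0, 1.5, 2.0
N = n_star(C, num, sigma, sigma_t, kappa)
assert N == 97
assert rho(N, num, sigma, sigma_t, kappa) <= C        # N* is feasible
assert rho(N + 1, num, sigma, sigma_t, kappa) > C     # N* + 1 violates the constraint
```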

Now let n = 2. The portfolio has the form

R(N_1, N_2) = ν_1N_1 − Σ_{j=1}^{N_1} X_{1,j} − Y_1N_1 + ν_2N_2 − Σ_{j=1}^{N_2} X_{2,j} − Y_2N_2,   (4.7)

where N_1, N_2 are positive integers, and {X_{1,j}}_{j∈N} and {X_{2,j}}_{j∈N} are sequences of uncorrelated random variables which are uncorrelated with Y_1 and Y_2. We suppose that the sequences {X_{1,j}}_{j∈N} and {X_{2,j}}_{j∈N} are uncorrelated with each other and that Y_1 and Y_2 are uncorrelated, too. Furthermore, we assume that the first two moments of {X_{1,j}}_{j∈N} and {X_{2,j}}_{j∈N} do not depend on j ∈ N. Moreover, we define

μ_i = E[X_{i,j}] and σ_i² = Var(X_{i,j}), for all j ∈ N and for all i = 1, 2,
μ̃_i = E[Y_i] and σ̃_i² = Var(Y_i), for i = 1, 2,

where both means and variances are finite. Then, we calculate

E[R(N_1, N_2)] = N_1( ν_1 − μ_1 − μ̃_1 ) + N_2( ν_2 − μ_2 − μ̃_2 ),
ρ(R(N_1, N_2)) = −N_1( ν_1 − μ_1 − μ̃_1 ) − N_2( ν_2 − μ_2 − μ̃_2 ) + κ √( N_1σ_1² + N_1²σ̃_1² + N_2σ_2² + N_2²σ̃_2² ).

In order to simplify the following calculations, we define, as above, the two constants

a_1 = ( ν_1 − μ_1 − μ̃_1 ) / C  and  a_2 = ( ν_2 − μ_2 − μ̃_2 ) / C.

Once again, we assume that these constants are positive, because we consider contracts with positive expectation.

The initial optimization problem (4.2) therefore turns out to have the following form:

maximize a_1N_1 + a_2N_2
subject to −a_1N_1 − a_2N_2 + (κ/C) √( N_1σ_1² + N_1²σ̃_1² + N_2σ_2² + N_2²σ̃_2² ) ≤ 1,   (4.8)
N_1, N_2 ≥ 0 integers, not both of them equal to zero.

It is useful, analogously as before, to square the constraint in (4.8), so we obtain

maximize a_1N_1 + a_2N_2
subject to N_1²( κ²σ̃_1²/C² − a_1² ) + N_1( κ²σ_1²/C² − 2a_1 ) + N_2( κ²σ_2²/C² − 2a_2 ) + N_2²( κ²σ̃_2²/C² − a_2² ) − 2a_1a_2N_1N_2 − 1 ≤ 0,   (4.9)
N_1, N_2 ≥ 0 integers, not both of them equal to zero.

Note that, both here and in the n-dimensional case, we make the same assumption as in (4.5); in general, this means

(κ/C) σ̃_i > a_i  for all i ∈ {1, . . . , n}.

In this particular case, where n = 2, it is assumed that

(κ/C) σ̃_1 > a_1  and  (κ/C) σ̃_2 > a_2.   (4.10)

Now we consider only the constraint in (4.9). It can be written as

N_1( κ²σ_1²/C² − 2a_1 ) + N_2( κ²σ_2²/C² − 2a_2 ) + q ≤ 1,   (4.11)

where q denotes the real quadratic form

q = Xᵀ A X,  with  X = (N_1, N_2)ᵀ,   (4.12)

and A representing the symmetric matrix of q relative to N_1 and N_2:

A = | κ²σ̃_1²/C² − a_1²    −a_1a_2            |
    | −a_1a_2              κ²σ̃_2²/C² − a_2²  |.

This notation is useful, since any real quadratic form q = XᵀAX as in (4.12) can be reduced to a diagonalized representation, and by an orthogonal change of variables the expression (4.11) can be rewritten in a form related to an ellipse. It therefore follows that the optimization problem can be solved directly. So we denote the eigenvalues of A by α_1 and α_2 and the associated orthonormal eigenvectors by P_1 and P_2; then we can represent q in the form

q = Yᵀ B Y = α_1x² + α_2y²,   (4.13)

where

B = Pᵀ A P = | α_1  0   |      and      P = (P_1, P_2) = | p_11  p_12 |
             | 0    α_2 |                                | p_21  p_22 |,

i.e., P is the orthogonal matrix with the orthonormal eigenvectors of A in its columns. For more details see Gilbert and Gilbert (1995), Chapter 8.5. From now on, we denote Y = (x, y)ᵀ. Then, by a change of variables from N_1, N_2 to x, y according to the rule X = PY, the constraint considered until now can be rewritten as follows:

β_1x + β_2y + α_1x² + α_2y² ≤ 1.   (4.14)

Recall that if A is a real and symmetric matrix, its eigenvalues are real, so α_1, α_2, β_1 and β_2 are real constants dependent on a_1, a_2, σ̃_1, σ̃_2, κ and C. In particular, α_1 and α_2 have the following form:

α_i = (1/2) [ κ²(σ̃_1² + σ̃_2²)/C² − a_1² − a_2² ± √( ( κ²(σ̃_1² − σ̃_2²)/C² − a_1² + a_2² )² + 4a_1²a_2² ) ],

where we choose α_1 to correspond to the plus sign and α_2 to the minus sign. Both constants are positive due to (4.10). Then the values of the constants β_1 and β_2 can be calculated too, but for reasons of space, we will not list these formulae at full length. However, it holds that

β_i = p_{1i}( κ²σ_1²/C² − 2a_1 ) + p_{2i}( κ²σ_2²/C² − 2a_2 )  for i = 1, 2.

The normalized eigenvector P_i associated with α_i can be represented as P_i = v_i / ‖v_i‖, with

v_i = ( a_1a_2 , κ²σ̃_1²/C² − a_1² − α_i )ᵀ.

Consequently, the form of the constraint mentioned in (4.14) can be transformed once again, so as to get the following form related to an ellipse:

β_1x + β_2y + α_1x² + α_2y² ≤ 1
⟺ α_1( (x − γ_1)² − γ_1² ) + α_2( (y − γ_2)² − γ_2² ) ≤ 1,  with γ_i = −β_i / (2α_i),
⟺ α_1(x − γ_1)² + α_2(y − γ_2)² ≤ 1 + α_1γ_1² + α_2γ_2²
⟺ α_1(x − γ_1)² / ( 1 + α_1γ_1² + α_2γ_2² ) + α_2(y − γ_2)² / ( 1 + α_1γ_1² + α_2γ_2² ) ≤ 1.   (4.15)

To make it easier, we define

ε_i = √( ( 1 + α_1γ_1² + α_2γ_2² ) / α_i )  for i = 1, 2.

Clearly, ε_1 and ε_2 represent the two half-axes of the ellipse given in (4.15). By the same change of variables, the function to maximize, a_1N_1 + a_2N_2, becomes ã_1x + ã_2y with

ã_i = a_1p_{1i} + a_2p_{2i}  for i = 1, 2.   (4.16)

To recapitulate, we now want to solve the following problem:

maximize f(x, y) subject to the condition g(x, y) ≤ 0,   (4.17)

where

f(x, y) = ã_1x + ã_2y,
g(x, y) = (x − γ_1)² / ε_1² + (y − γ_2)² / ε_2² − 1.

Figure 4.1: Illustration of the optimization problem (4.17).

Let us ignore the restrictions coming from N_1, N_2 ∈ N_0 with (N_1, N_2) ≠ (0, 0) for the moment. An optimization problem of this form can be illustrated by Figure 4.1, where the line f(x, y) = const is orthogonal to (ã_1, ã_2). We see that the required maximal value for f(x, y) is reached on the boundary of the ellipse described by g(x, y) = 0, at the point on the ellipse whose tangent is parallel to the straight line. The optimal solution is (x*, y*) and it can be calculated with the aid of Lagrange multipliers. In fact, by introducing suitable multipliers, the constrained extremum problem can be treated as one of an ordinary extremum. More precisely, as follows from the Lagrange multiplier rule: if f has a relative extremum at (x*, y*) subject to the constraint g(x*, y*) = 0 and if g_y(x*, y*) ≠ 0, then a real number δ, called the Lagrange multiplier, exists such that (x*, y*, δ) is a critical point of the function H defined by H(x, y, δ) = f(x, y) + δ g(x, y). For more details see Appendix B. Therefore, the pertinent problem to resolve is

H_x(x, y, δ) = f_x(x, y) + δ g_x(x, y) = 0,
H_y(x, y, δ) = f_y(x, y) + δ g_y(x, y) = 0,
H_δ(x, y, δ) = g(x, y) = 0.   (4.18)


We get a system of three equations which, in our case, we solve with the aid of the computer. Then, through the inverse change of variables, we can finally determine the values of N_1 and N_2, respectively, that solve the initial problem (4.8). Eliminating the multiplier δ from (4.18) yields the tangency condition

a_1σ̃_2² N_2 = a_2σ̃_1² N_1 + ( a_2σ_1² − a_1σ_2² ) / 2,

and substituting N_2 from this relation into the binding constraint (4.9) gives a quadratic equation P N_1² + Q N_1 + S = 0 whose larger root is the optimum. With the abbreviations

α = a_2σ̃_1² / (a_1σ̃_2²),  β = ( a_2σ_1² − a_1σ_2² ) / (2a_1σ̃_2²),
A_i = κ²σ̃_i²/C² − a_i²,  B_i = κ²σ_i²/C² − 2a_i  (i = 1, 2),

we have

P = A_1 − 2a_1a_2α + α²A_2,
Q = B_1 + αB_2 + 2β( αA_2 − a_1a_2 ),
S = β²A_2 + βB_2 − 1,

and the numbers of contracts which guarantee a maximal risk-adjusted return are

N_1* = ( −Q + √( Q² − 4PS ) ) / (2P),  N_2* = αN_1* + β.

Note that N_1* and N_2* defined above are not necessarily integers, so they have to be rounded in practice.
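For small instances, the two-unit problem can also be solved by exhaustive search over the integer grid, which serves as an independent check. The following Python sketch is not part of the thesis; the parameter values are hypothetical (they reuse unit 1 of the earlier one-unit illustration, for which N_1 = 97 is the single-unit optimum) and show that splitting the capital between both units yields a strictly larger objective.

```python
import math

def rho2(N1, N2, m1, m2, s1, s2, s1t, s2t, kappa):
    """Risk of the two-unit portfolio; m_i = nu_i - mu_i - mu~_i."""
    var = N1 * s1 ** 2 + N1 ** 2 * s1t ** 2 + N2 * s2 ** 2 + N2 ** 2 * s2t ** 2
    return -(N1 * m1 + N2 * m2) + kappa * math.sqrt(var)

# hypothetical parameters; unit 1 alone admits at most N1 = 97 contracts
C, kappa = 100.0, 2.0
m1, s1, s1t = 2.0, 2.0, 1.5
m2, s2, s2t = 1.0, 1.0, 1.5

best, best_val = None, -1.0
for N1 in range(301):
    for N2 in range(301):
        if (N1, N2) == (0, 0):
            continue
        if rho2(N1, N2, m1, m2, s1, s2, s1t, s2t, kappa) <= C:
            val = m1 * N1 + m2 * N2
            if val > best_val:
                best, best_val = (N1, N2), val

assert rho2(*best, m1, m2, s1, s2, s1t, s2t, kappa) <= C
assert best_val > m1 * 97          # mixing both units beats unit 1 alone
assert best[0] > 0 and best[1] > 0
```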

Remark. We can proceed analogously even if we consider a model of the same form as (4.7) and we allow correlation between the two random variables Y_1, Y_2. In fact, it holds that

E[R(N_1, N_2)] = N_1( ν_1 − μ_1 − μ̃_1 ) + N_2( ν_2 − μ_2 − μ̃_2 )

and

Var(R(N_1, N_2)) = Var(R_1 + R_2) = Var(R_1) + Var(R_2) + 2 Cov(R_1, R_2)
= N_1σ_1² + N_1²σ̃_1² + N_2σ_2² + N_2²σ̃_2² + 2N_1N_2 ρ_Y σ̃_1σ̃_2,

because Cov(R_1, R_2) = N_1N_2 Cov(Y_1, Y_2) = N_1N_2 ρ_Y σ̃_1σ̃_2, where ρ_Y = corr(Y_1, Y_2). Therefore, if we want to solve the optimization problem of the form of (4.2) for such a model R, we must find an optimal solution in the same way as before. As a matter of fact, in this case the optimization problem turns out to be very similar to (4.9). In particular, the constraint has the following form:

N_1( κ²σ_1²/C² − 2a_1 ) + N_2( κ²σ_2²/C² − 2a_2 ) + N_1²( κ²σ̃_1²/C² − a_1² ) + N_2²( κ²σ̃_2²/C² − a_2² ) − 2N_1N_2( a_1a_2 − (κ²/C²) ρ_Y σ̃_1σ̃_2 ) − 1 ≤ 0,

and it can be written as

N_1( κ²σ_1²/C² − 2a_1 ) + N_2( κ²σ_2²/C² − 2a_2 ) + q ≤ 1,

where q denotes the real quadratic form q = (N_1, N_2) A (N_1, N_2)ᵀ with

A = | κ²σ̃_1²/C² − a_1²                  −a_1a_2 + (κ²/C²) ρ_Y σ̃_1σ̃_2 |
    | −a_1a_2 + (κ²/C²) ρ_Y σ̃_1σ̃_2     κ²σ̃_2²/C² − a_2²              |.

Thus, adapting the previously considered arguments (change of variables, form related to an ellipse, Lagrange multipliers), we can compute the optimal solution.

4.2 The general case

If we consider a general model related to a firm consisting of n units, i.e.,

R = Σ_{i=1}^n R_i,  with R_i of the form of (4.1),

we can proceed analogously, as before. In fact, similar arguments can be adapted to solve an n-dimensional optimization problem, too. Starting from the initial problem of the form of (4.2), we square the constraint ρ(R) ≤ C and then we represent it with the help of a quadratic form defined by an n × n matrix A. Diagonalizing A and applying an orthogonal change of variables, we can rewrite the constraint in a form related to a conic section. Therefore, we get a problem with new variables x_1, . . . , x_n instead of N_1, . . . , N_n, and it has the following form:

maximize f(x_1, . . . , x_n) subject to the condition g(x_1, . . . , x_n) ≤ 0,

where, as before, the maximal value of f(x_1, . . . , x_n) is reached on the boundary described by g(x_1, . . . , x_n) = 0. This means that the optimal solution (x_1*, . . . , x_n*) can be found by employing the Lagrange multiplier rule (see Appendix B for more details). As a consequence, we have to solve a system of n + 1 equations with respect to the unknowns x_1, . . . , x_n, δ:

H_{x_i}(x_1, . . . , x_n, δ) = f_{x_i}(x_1, . . . , x_n) + δ g_{x_i}(x_1, . . . , x_n) = 0,  i = 1, . . . , n,
H_δ(x_1, . . . , x_n, δ) = g(x_1, . . . , x_n) = 0.   (4.19)

Then, by the inverse orthogonal change of variables, we can determine the solution of the initial problem with respect to the original variables N_1, . . . , N_n. Due to the length of the expressions already in the two-dimensional case, we avoid listing the values of N_1*, . . . , N_n*, i.e., the optimal solution of the initial problem.

4.3 The second variant of the model: the N_i's are random variables

In this section, we still consider the same model representing the whole portfolio of a company, but with one difference. In fact, we assume that the number of contracts for every unit i, denoted by N_i with i ∈ {1, . . . , n}, are Poisson-distributed random variables with parameter λ_i > 0. To summarize, we have

R = Σ_{i=1}^n ( ν_iN_i − Σ_{j=1}^{N_i} X_{i,j} − Y_iN_i ),   (4.20)

with

• N_i ~ POIS(λ_i) for all i ∈ {1, . . . , n},
• {X_{i,j}}_{j∈N} independent¹ sequences of i.i.d. random variables with finite means and finite variances for all i ∈ {1, . . . , n},
  μ_i = E[X_{i,j}] and σ_i² = Var(X_{i,j}),

¹ Note that these independence assumptions are introduced for simplicity. In fact, even if we consider a model which does not fulfil these supplementary requirements, we can proceed in a similar way.

• Y_1, . . . , Y_n independent¹ random variables with both means and variances finite,
  μ̃_i = E[Y_i] and σ̃_i² = Var(Y_i),
• the sequences {X_{i,j}}_{j∈N}, i ∈ {1, . . . , n}, are independent of the random variables Y_1, . . . , Y_n.

Moreover, the numbers of contracts N_1, . . . , N_n are assumed to be independent of the sequences {X_{i,j}}_{j∈N} for all i ∈ {1, . . . , n} and of the random variables Y_1, . . . , Y_n. Recall that the aforementioned assumption

(κ/C) σ̃_i > ( ν_i − μ_i − μ̃_i ) / C

is still valid for all i ∈ {1, . . . , n}. We are now interested in the solution of the optimization problem

maximize E[R] / C
subject to ρ(R) ≤ C,   (4.21)
λ_i > 0 for all i ∈ {1, . . . , n}.

We observe that this problem is analogous to (4.2), but that this time we have weaker conditions; in fact, the values of the solution are no longer required to be integers. Before considering this problem, we must do some calculations needed in the following. First, in the next proposition we recall two particular equalities concerning expectation and variance.

Proposition 4.1. Let X be an R-valued random variable defined on a probability space (Ω, F, P) with a finite second moment and let G be a sub-σ-field of F. Then

E[X] = E[ E[X | G] ],
Var(X) = E[ Var(X | G) ] + Var( E[X | G] ).

Proof. See Fristedt and Gray (1997), Chapter 23.


Then, since Ni is independent of the sequence {Xi,j }j ¡ÊN and of Yi and because of the Wald Identity we obtain

n Ni

E [R] =

i=1 n

E Ni ¦Íi ?

j =1

Xi,j ? Ni Yi

Ni

=

i=1 n

E [Ni ¦Íi ] ? E

j =1

Xi,j ? E [Ni Yi ]

=

i=1 n

¦Íi E [Ni ] ? E [Ni ]E [Xi,j ] ? E [Ni ]E [Yi ] ¦Ëi ¦Íi ? ?i ? ? ?i

i=1

=

and since all the sequences {X_{i,j}}_{j∈N}, i ∈ {1, ..., n}, are independent of Y_1, ..., Y_n, it holds that

    Var(R) = Σ_{i=1}^n Var( N_i ν_i − Σ_{j=1}^{N_i} X_{i,j} − N_i Y_i )
           = Σ_{i=1}^n ( Var(N_i ν_i) + Var( Σ_{j=1}^{N_i} X_{i,j} ) + Var(N_i Y_i)
                         − 2 Cov( N_i ν_i, Σ_{j=1}^{N_i} X_{i,j} ) − 2 Cov( N_i ν_i, N_i Y_i )
                         + 2 Cov( Σ_{j=1}^{N_i} X_{i,j}, N_i Y_i ) ).

First, we calculate the single variances and covariances separately:

    Var(N_i ν_i) = ν_i² Var(N_i) = λ_i ν_i²,

    Var( Σ_{j=1}^{N_i} X_{i,j} ) = E[N_i] σ_i² + Var(N_i) μ_i² = λ_i ( σ_i² + μ_i² ),

    Var(N_i Y_i) = E[N_i² σ̄_i²] + Var(N_i μ̄_i) = ( λ_i² + λ_i ) σ̄_i² + λ_i μ̄_i²,

    Cov( N_i ν_i, Σ_{j=1}^{N_i} X_{i,j} ) = ν_i Cov( N_i, Σ_{j=1}^{N_i} X_{i,j} )
        = ν_i ( E[ N_i Σ_{j=1}^{N_i} X_{i,j} ] − E[N_i] E[ Σ_{j=1}^{N_i} X_{i,j} ] )
        = ν_i ( E[N_i²] μ_i − E[N_i] E[N_i] μ_i ) = λ_i ν_i μ_i,

    Cov( Σ_{j=1}^{N_i} X_{i,j}, N_i Y_i ) = E[ N_i Y_i Σ_{j=1}^{N_i} X_{i,j} ] − E[ Σ_{j=1}^{N_i} X_{i,j} ] E[N_i Y_i]
        = E[N_i²] μ_i μ̄_i − E[N_i]² μ_i μ̄_i = λ_i μ_i μ̄_i,

    Cov( N_i ν_i, N_i Y_i ) = ν_i Cov( N_i, N_i Y_i ) = ν_i ( E[N_i² Y_i] − E[N_i]² E[Y_i] ) = λ_i ν_i μ̄_i.

Then, collecting terms, we obtain

    Var(R) = Σ_{i=1}^n ( λ_i ν_i² + λ_i ( σ_i² + μ_i² ) + ( λ_i² + λ_i ) σ̄_i² + λ_i μ̄_i²
                         + 2 λ_i μ_i μ̄_i − 2 λ_i ν_i μ_i − 2 λ_i ν_i μ̄_i )
           = Σ_{i=1}^n ( λ_i² σ̄_i² + λ_i ( ν_i² + μ_i² + μ̄_i² − 2 ν_i μ_i − 2 ν_i μ̄_i + 2 μ_i μ̄_i + σ_i² + σ̄_i² ) )
           = Σ_{i=1}^n ( λ_i² σ̄_i² + λ_i ( ( ν_i − μ_i − μ̄_i )² + σ_i² + σ̄_i² ) ).
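The formulas for E[R] and Var(R) can be sanity-checked by simulation for a single unit. All parameter values below are arbitrary illustrative choices, and the claim sizes X_j are taken normal only for convenience of sampling (the formulas only use their first two moments):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method: count uniforms until the running product drops below e^{-lam}
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_R(lam, nu, mu, sigma, mu_bar, sigma_bar, rng):
    # one draw of R = nu*N - sum_{j=1}^{N} X_j - Y*N for a single unit
    N = poisson(lam, rng)
    claims = sum(rng.gauss(mu, sigma) for _ in range(N))
    return nu * N - claims - rng.gauss(mu_bar, sigma_bar) * N

lam, nu, mu, sigma, mu_bar, sigma_bar = 4.0, 10.0, 6.0, 2.0, 1.0, 0.5
exp_mean = lam * (nu - mu - mu_bar)
exp_var = (lam ** 2 * sigma_bar ** 2
           + lam * ((nu - mu - mu_bar) ** 2 + sigma ** 2 + sigma_bar ** 2))

rng = random.Random(0)
draws = [simulate_R(lam, nu, mu, sigma, mu_bar, sigma_bar, rng)
         for _ in range(100_000)]
mc_mean = sum(draws) / len(draws)
mc_var = sum((d - mc_mean) ** 2 for d in draws) / (len(draws) - 1)
```

With these parameters the formulas give E[R] = 12 and Var(R) = 57, and the Monte Carlo estimates agree within sampling error.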

Remark. Note that, if we allow dependence between the sequences {X_{i,j}}_{j∈N}, i ∈ {1, ..., n}, and also between the random variables Y_1, ..., Y_n, it holds that

    Var(R) = Σ_{i=1}^n Var( N_i ν_i − Σ_{j=1}^{N_i} X_{i,j} − N_i Y_i )
             + 2 Σ_{i=1}^{n−1} Σ_{k>i} Cov( N_i ν_i − Σ_{j=1}^{N_i} X_{i,j} − N_i Y_i ,
                                            N_k ν_k − Σ_{j=1}^{N_k} X_{k,j} − N_k Y_k ),

but we will not deal with this case in detail.

In order to determine the solution of (4.21) we take the same approach as in the previous section; that is, we consider the simplest cases with n = 1 and n = 2. Because of the length of the expressions we will not treat the general case, but all the following arguments can be


adapted in order to solve an n-dimensional optimization problem with respect to the model (4.20). Let n = 1 and

    R(N) = ν N − Σ_{j=1}^{N} X_j − Y N

with the above-mentioned assumptions. Recall that in this section N is a Poisson-distributed random variable with parameter λ > 0. From the previous calculations we obtain

    E[R(N)] = λ ( ν − μ − μ̄ ),
    Var(R(N)) = λ² σ̄² + λ ( ( ν − μ − μ̄ )² + σ² + σ̄² ).

The standard deviation risk measure for R(N) is therefore

    ρ(R(N)) = −λ ( ν − μ − μ̄ ) + κ √( λ² σ̄² + λ ( ( ν − μ − μ̄ )² + σ² + σ̄² ) ).

Consequently, in this case the initial problem (4.21) takes the form

    maximize    λ ( ν − μ − μ̄ ) / C
    subject to  −λ ( ν − μ − μ̄ ) + κ √( λ² σ̄² + λ ( ( ν − μ − μ̄ )² + σ² + σ̄² ) ) ≤ C,
                λ > 0.

Analogously to before, it is useful to introduce some positive constants in order to get shorter expressions. We define

    a = ( ν − μ − μ̄ ) / C ,    b = ( ν − μ − μ̄ )² + σ² + σ̄².

We recall that κσ̄/C is assumed to be greater than a, for the reasons already mentioned. Moreover, if we square the constraint we obtain a new formulation of the problem to solve, which is very similar to (4.6):

    maximize    a λ
    subject to  λ² ( κ²σ̄²/C² − a² ) + λ ( κ²b/C² − 2a ) − 1 ≤ 0,        (4.22)
                λ > 0.

We observe that, with respect to problem (4.6), the only difference is the value of the constant related to the variable λ. We will not rewrite all the steps of the solution, because those in the previous section are easily adaptable. Therefore, the optimal solution of (4.22) is

    λ* = ( 2aC² − κ²b + κ √( κ²b² − 4abC² + 4σ̄²C² ) ) / ( 2 ( κ²σ̄² − a²C² ) ).
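The closed-form optimum of (4.22) can be verified numerically: it should lie on the boundary of the squared constraint and be the largest feasible λ. The parameter values below are illustrative assumptions chosen so that κσ̄/C > a:

```python
import math

# Illustrative (assumed) parameters satisfying kappa * sigma_bar / C > a
C, kappa = 100.0, 2.0
margin = 3.0                   # nu - mu - mu_bar
sigma2, sbar2 = 2.25, 4.0      # sigma^2 and sigma_bar^2
a = margin / C
b = margin ** 2 + sigma2 + sbar2

# closed-form optimum of (4.22)
lam_star = ((2 * a * C ** 2 - kappa ** 2 * b
             + kappa * math.sqrt(kappa ** 2 * b ** 2 - 4 * a * b * C ** 2
                                 + 4 * sbar2 * C ** 2))
            / (2 * (kappa ** 2 * sbar2 - a ** 2 * C ** 2)))

def g(lam):
    # squared constraint of (4.22); lam is feasible iff g(lam) <= 0
    return (lam ** 2 * (kappa ** 2 * sbar2 / C ** 2 - a ** 2)
            + lam * (kappa ** 2 * b / C ** 2 - 2 * a) - 1)
```

Since the objective aλ is increasing in λ and the leading coefficient of g is positive, the optimum is the larger root of g, so g(λ*) = 0, points slightly below λ* are feasible and points slightly above are not.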

Now, let n = 2 and

    R(N_1, N_2) = ν_1 N_1 − Σ_{j=1}^{N_1} X_{1,j} − Y_1 N_1 + ν_2 N_2 − Σ_{j=1}^{N_2} X_{2,j} − Y_2 N_2,

with the assumptions cited at the beginning of this section. From the previous calculation we obtain

    E[R(N_1, N_2)] = λ_1 ( ν_1 − μ_1 − μ̄_1 ) + λ_2 ( ν_2 − μ_2 − μ̄_2 ),
    Var(R(N_1, N_2)) = λ_1² σ̄_1² + λ_1 ( ( ν_1 − μ_1 − μ̄_1 )² + σ_1² + σ̄_1² )
                       + λ_2² σ̄_2² + λ_2 ( ( ν_2 − μ_2 − μ̄_2 )² + σ_2² + σ̄_2² ),

and introducing the constants

    a_1 = ( ν_1 − μ_1 − μ̄_1 ) / C ,    b_1 = ( ν_1 − μ_1 − μ̄_1 )² + σ_1² + σ̄_1²,
    a_2 = ( ν_2 − μ_2 − μ̄_2 ) / C ,    b_2 = ( ν_2 − μ_2 − μ̄_2 )² + σ_2² + σ̄_2²,

it follows that the problem to be solved in this case is

    maximize    a_1 λ_1 + a_2 λ_2
    subject to  −a_1 λ_1 − a_2 λ_2 + (κ/C) √( λ_1 b_1 + λ_1² σ̄_1² + λ_2 b_2 + λ_2² σ̄_2² ) ≤ 1,    (4.23)
                λ_1, λ_2 > 0.

Remember that we assume κ σ̄_1 / C > a_1 and κ σ̄_2 / C > a_2.


Once again, squaring the main constraint, we get a new formulation very similar to (4.9), with the exception of the constants related to λ_1 and λ_2 respectively, i.e.,

    maximize    a_1 λ_1 + a_2 λ_2
    subject to  λ_1 ( κ²b_1/C² − 2a_1 ) + λ_2 ( κ²b_2/C² − 2a_2 ) + λ_1² ( κ²σ̄_1²/C² − a_1² )    (4.24)
                + λ_2² ( κ²σ̄_2²/C² − a_2² ) − 2 a_1 a_2 λ_1 λ_2 − 1 ≤ 0,
                λ_1, λ_2 > 0.

We can calculate the solution for this problem in the same way as in the previous section. Without rewriting all the steps concerning the associated real quadratic form q, its matrix and the orthogonal change of variables (which bring us to a form of the constraint corresponding to an ellipse), we obtain that the optimal value of the function to maximize is reached by

    λ_1* = ( a_1 σ̄_2² κ √s + κ² σ̄_2² b_1 + C² ( a_1 a_2 b_2 − a_2² b_1 ) )
           / ( 2 ( C² ( a_1² σ̄_2² + a_2² σ̄_1² ) − κ² σ̄_1² σ̄_2² ) )

and

    λ_2* = ( a_2 σ̄_1² κ √s + κ² σ̄_1² b_2 + C² ( a_1 a_2 b_1 − a_1² b_2 ) )
           / ( 2 ( C² ( a_1² σ̄_2² + a_2² σ̄_1² ) − κ² σ̄_1² σ̄_2² ) ),

where we use s as short notation for

    s = ( κ² ( a_2 σ̄_1 b_2 − a_1 σ̄_2 b_1 )² + 4 σ̄_1² σ̄_2² ( b_1 + b_2 ) + C² ( a_1 b_2 − a_2 b_1 )² )
        / ( a_1² σ̄_2² + a_2² σ̄_1² ).

All these arguments are also valid for an n-dimensional problem. Consequently, the initial optimization problem (4.21) has optimal solutions for every model of the form (4.20), but because of the length of the expressions we will not list the general solutions.
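Since the closed-form solutions become lengthy, problem (4.23) can also be cross-checked numerically; a coarse-to-fine grid search is a minimal sketch. All parameter values below are illustrative assumptions, chosen so that κσ̄_i/C > a_i and the problem is bounded:

```python
import math

# Illustrative (assumed) parameters for two units, with kappa * sbar_i / C > a_i
C, kappa = 100.0, 2.0
a = (0.03, 0.015)        # a_i = (nu_i - mu_i - mu_bar_i) / C
b = (15.25, 8.0)         # b_i = (nu_i - mu_i - mu_bar_i)^2 + sigma_i^2 + sbar_i^2
sbar = (2.0, 2.0)        # sbar_i = sigma_bar_i

def rho_ok(l1, l2):
    # normalized standard deviation constraint of (4.23)
    mean = a[0] * l1 + a[1] * l2
    sd = math.sqrt(l1 * b[0] + l1 ** 2 * sbar[0] ** 2
                   + l2 * b[1] + l2 ** 2 * sbar[1] ** 2)
    return -mean + (kappa / C) * sd <= 1.0

def objective(l1, l2):
    return a[0] * l1 + a[1] * l2

# coarse-to-fine grid search over the positive quadrant
lo1 = lo2 = 0.0
hi1 = hi2 = 400.0
best = (0.0, 1e-9, 1e-9)
for _ in range(25):
    step1 = (hi1 - lo1) / 20
    step2 = (hi2 - lo2) / 20
    for i in range(21):
        for j in range(21):
            l1, l2 = lo1 + i * step1, lo2 + j * step2
            if l1 > 0 and l2 > 0 and rho_ok(l1, l2):
                best = max(best, (objective(l1, l2), l1, l2))
    _, c1, c2 = best
    lo1, hi1 = max(0.0, c1 - step1), c1 + step1
    lo2, hi2 = max(0.0, c2 - step2), c2 + step2

val, l1_opt, l2_opt = best
```

The refinement repeatedly shrinks the search window around the best feasible point found so far; for a linear objective over this feasible region that is sufficient for a sanity check, though a dedicated constrained optimizer would be preferable in production.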

4.4

The third variant of the model: N_i is the sum of a positive integer and a random variable for i ∈ {1, ..., n}

In this section, we consider a different variation of the model examined in this work and defined in Section 2.5. This new variation is based on the definition of N_i, i.e. the variables which represent the number of contracts of every unit i of the company, for i ∈ {1, ..., n}. We define

    N_i = N_i^fix + N_i^pois    for i ∈ {1, ..., n}.

This means that the number of contracts of any unit i is defined as the sum of a fixed integer N_i^fix and a Poisson-distributed random variable N_i^pois with parameter λ_i ≥ 0. We observe that this variant of the model includes both versions we considered previously. In fact, if we set N_i^fix = 0 for all i ∈ {1, ..., n}, we obtain the same variant as the one examined in Section 4.3, and if we set λ_i = 0 for all i ∈ {1, ..., n}, we have the initial model considered in Sections 4.1 and 4.2. We represent the whole portfolio through

    R = Σ_{i=1}^n ( ν_i ( N_i^fix + N_i^pois ) − Σ_{j=1}^{N_i^fix + N_i^pois} X_{i,j} − Y_i ( N_i^fix + N_i^pois ) ),

with the assumptions mentioned. In particular, we recall that {X_{i,j}}_{j∈N}, i ∈ {1, ..., n}, are independent sequences of i.i.d. random variables, independent of the random variables Y_1, ..., Y_n, and that Y_1, ..., Y_n are independent, too. Moreover, N_1, ..., N_n are assumed to be independent of all the sequences {X_{i,j}}_{j∈N}, i ∈ {1, ..., n}, and of Y_1, ..., Y_n. Analogously, we define the finite means and finite variances of X_{i,j} and Y_i, respectively, by

    μ_i = E[X_{i,j}]  and  σ_i² = Var(X_{i,j}),  for all j ∈ N and all i ∈ {1, ..., n},
    μ̄_i = E[Y_i]  and  σ̄_i² = Var(Y_i),  for all i ∈ {1, ..., n}.

Our interest turns to the following optimization problem, in order to determine the portfolio which guarantees a maximal return:

    maximize    E[R]/C
    subject to  ρ(R) ≤ C,                                      (4.25)
                λ_i ≥ 0 for all i ∈ {1, ..., n},
                N_i^fix ≥ 0 integer, for all i ∈ {1, ..., n}.

We will consider only the simplest case with n = 1, because the form of N_i already entails solving a two-dimensional optimization problem. A solution for a more general problem can be computed with the same arguments, which are easily adapted. For the sake of practicality, we rewrite the risk R as follows:

    R = ν Ñ − Σ_{j=1}^{Ñ} X_j − Y Ñ,    where Ñ = N + N^p,

with N denoting some positive integer and N^p a Poisson-distributed random variable with parameter λ ≥ 0. Using the formulae cited in the previous section, we compute

    E[R] = ( N + λ ) ( ν − μ − μ̄ ),

    Var(R) = Var( ν ( N + N^p ) ) + Var( Σ_{j=1}^{N+N^p} X_j ) + Var( Y ( N + N^p ) )
             − 2 Cov( ν ( N + N^p ), Σ_{j=1}^{N+N^p} X_j ) − 2 Cov( ν ( N + N^p ), Y ( N + N^p ) )
             + 2 Cov( Σ_{j=1}^{N+N^p} X_j, Y ( N + N^p ) )
           = λ ( ( ν − μ − μ̄ )² + σ² + σ̄² ) + N σ² + ( N² + λ² ) σ̄².

To simplify the calculations we introduce the constants a = ( ν − μ − μ̄ ) / C and b = ( ν − μ − μ̄ )² + σ² + σ̄². Moreover, as in previous sections we assume κσ̄/C > a. Therefore, the optimization problem can be written as

    maximize    a ( N + λ )
    subject to  −a ( N + λ ) + (κ/C) √( λ b + N σ² + ( N² + λ² ) σ̄² ) ≤ 1,    (4.26)
                λ ≥ 0, N ≥ 0.

If we square the main constraint we obtain

    maximize    a ( N + λ )
    subject to  λ² ( κ²σ̄²/C² − a² ) + λ ( κ²b/C² − 2a ) + N² ( κ²σ̄²/C² − a² )    (4.27)
                + N ( κ²σ²/C² − 2a ) − 2 a² N λ − 1 ≤ 0,
                λ ≥ 0, N ≥ 0.

Then, if we consider only the constraint, we can rewrite it in a new form related to an ellipse. In fact, introducing the quadratic form

    q = λ² ( κ²σ̄²/C² − a² ) + N² ( κ²σ̄²/C² − a² ) − 2 a² N λ


and computing its corresponding matrix A, we can determine an orthogonal matrix P with the orthonormal eigenvectors of A in its columns. Then, by a change of variables from (N, λ) to (x, y) according to

    (N, λ)ᵀ = P (x, y)ᵀ,

we obtain that the constraint can be written as

    β_1 x + β_2 y + α_1 x² + α_2 y² ≤ 1.

We observe that we get the same result as in the previous sections. The only differences are the values of the constants, but, in any case, the way to proceed is similar. Without writing all the calculations needed to compute these values, we can now solve the optimization problem as we did in Section 4.2, and we obtain that the optimal solution is

    N* = ( κ² σ̄² ( 2σ² + b ) − a C² κ² ( 6σ̄² − a ( b + σ² ) ) + 4 a³ C⁴ )
         / ( 2 κ² σ̄² ( a² C² − κ² σ̄² ) )
         + ( ( 3 κ² σ̄² − 2 a² C² ) √s ) / ( 2 κ² σ̄² ( a² C² − κ² σ̄² ) )

and

    λ* = ( 4 a C² − κ² ( σ² + b ) − 2 √s ) / ( 2 κ² σ̄² ),

where we use s as short notation for

    s = ( 16 a⁴ C⁶ − 8 a³ ( σ² + b ) κ² C⁴
          + ( a² ( σ² + b )² + 4 a ( 2b + 3σ² ) σ̄² − 4 σ̄⁴ ) κ⁴ C² + σ⁴ σ̄² κ⁶ )
        / ( 4 a² C² − 5 κ² σ̄² ).
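Rather than relying on the lengthy closed forms, the squared problem (4.27) can be solved numerically: at each fixed integer N the binding constraint is a quadratic in λ, whose larger root is the largest feasible λ. The parameter values below are illustrative assumptions, chosen so that κ²σ̄²/C² > 2a², which keeps the quadratic form elliptic:

```python
import math

# Illustrative (assumed) parameters with kappa^2 sbar^2 / C^2 > 2 a^2
C, kappa = 100.0, 2.0
a, b = 0.02, 10.25            # a = (nu - mu - mu_bar)/C, b = margin^2 + sigma^2 + sbar^2
sigma2, sbar2 = 2.25, 4.0     # sigma^2 and sigma_bar^2
k = kappa ** 2 / C ** 2
A = k * sbar2 - a ** 2        # coefficient of lambda^2 and N^2 in (4.27)

def best_lambda(N):
    # At fixed N, (4.27) is a quadratic inequality in lambda; the largest
    # feasible lambda is the larger root of the binding constraint.
    p = k * b - 2 * a - 2 * a ** 2 * N
    q = A * N * N + (k * sigma2 - 2 * a) * N - 1
    disc = p * p - 4 * A * q
    if disc < 0:
        return None
    return (-p + math.sqrt(disc)) / (2 * A)

best = max((a * (N + l), N, l)
           for N in range(0, 301)
           for l in [best_lambda(N)] if l is not None and l >= 0)
val, N_opt, lam_opt = best
```

Scanning N and taking the largest feasible λ for each value gives the exact optimum of (4.27) up to the range of the scan, which is a useful cross-check on the closed-form expressions above.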

Chapter 5

Capital allocation

An insurance company has a total target return and, when establishing a business plan, spreads the total return down to the different portfolios. The company is also interested in comparing the individual returns r_i = E[R_i]/C_i of the various business units i ∈ {1, ..., n}. The vital question concerning this is: how can one choose C_i? The idea is to introduce a capital allocation principle and to split up the risk capital amongst the various business units, but there is no general answer to the question of how the risk capital should be allocated. There are different classes of capital allocation methodologies, and a formal description can be found in Albrecht (1997). In the following sections, we consider two different capital allocation principles, namely the covariance principle and the expected shortfall principle.

5.1

Covariance principle

We now want to calculate the well-known covariance principle for the special case of RAC(R) = −E[R] + κ σ(R) with κ > 0 and with R having the following form

    R = Σ_{i=1}^n R_i = Σ_{i=1}^n ( ν_i N_i − Σ_{j=1}^{N_i} X_{i,j} − Y_i N_i ),

where

• N_1, ..., N_n are positive integers,

• {X_{i,1}, ..., X_{i,N_i}} are finite sequences of i.i.d. random variables with finite means and finite variances for all i ∈ {1, ..., n}, with

    μ_i = E[X_{i,j}]  and  σ_i² = Var(X_{i,j}),

• the linear correlation coefficients are denoted by ρ_{ik} = corr(X_{i,j}, X_{k,l}) for all i, k ∈ {1, ..., n} with i ≠ k and j ∈ {1, ..., N_i}, l ∈ {1, ..., N_k},

• Y_1, ..., Y_n are random variables with finite means and finite variances

    μ̄_i = E[Y_i]  and  σ̄_i² = Var(Y_i),

• the sequences {X_{i,1}, ..., X_{i,N_i}}, for every i ∈ {1, ..., n}, are independent of the random variables Y_1, ..., Y_n.

Remark. There are restrictions on the correlation coefficients ρ_{ik} with i, k ∈ {1, ..., n}, i ≠ k: the covariance matrix of the (N_1 + ... + N_n)-dimensional random vector (X_{1,1}, ..., X_{1,N_1}, X_{2,1}, ..., X_{2,N_2}, ..., X_{n,1}, ..., X_{n,N_n}) has to be positive semidefinite. If this is the case, then there exist random variables, e.g. multivariate normally distributed ones, with the prescribed dependence structure.

In general it holds that

    C_i = C Cov(R_i, R) / Var(R),

where C denotes the capital the company wants to invest and C_i is the capital the company will allocate to business unit i. Let us compute the numerator and the denominator separately. On one side, for the numerator it holds that

where C denotes the capital the company wants to invest and Ci is the capital the company will allocate to business unit i. Let us compute the enumerator and the denominator separately. On one side, we have that, for the enumerator, it holds that

    Cov(R_i, R) = Cov( R_i, Σ_{k=1}^n R_k )
                = Var(R_i) + Σ_{k≠i} Cov(R_i, R_k)
                = Var( ν_i N_i − Σ_{j=1}^{N_i} X_{i,j} − Y_i N_i ) + Σ_{k≠i} Cov(R_i, R_k)
                = N_i σ_i² + N_i² σ̄_i² + Σ_{k≠i} N_i N_k ( ρ_{ik} σ_i σ_k + ρ̄_{ik} σ̄_i σ̄_k ),

since, for any k ≠ i,

    Cov(R_i, R_k) = Cov( ν_i N_i − Σ_{j=1}^{N_i} X_{i,j} − Y_i N_i , ν_k N_k − Σ_{l=1}^{N_k} X_{k,l} − Y_k N_k )
                  = Cov( Σ_{j=1}^{N_i} X_{i,j} + Y_i N_i , Σ_{l=1}^{N_k} X_{k,l} + Y_k N_k )
                  = Σ_{j=1}^{N_i} Σ_{l=1}^{N_k} Cov( X_{i,j}, X_{k,l} ) + N_i N_k Cov( Y_i, Y_k )
                  = N_i N_k ( ρ_{ik} σ_i σ_k + ρ̄_{ik} σ̄_i σ̄_k ),

where ρ̄_{ik} denotes the linear correlation coefficient of Y_i and Y_k, in particular, for i, k ∈ {1, ..., n} with i ≠ k, ρ̄_{ik} = corr(Y_i, Y_k). On the other side, for the denominator, it holds that

    Var(R) = Var( Σ_{j=1}^n R_j )
           = Σ_{j=1}^n Var(R_j) + 2 Σ_{j=1}^{n−1} Σ_{k=j+1}^n Cov(R_j, R_k)
           = Σ_{j=1}^n ( N_j σ_j² + N_j² σ̄_j² )
             + 2 Σ_{j=1}^{n−1} Σ_{k=j+1}^n N_j N_k ( ρ_{jk} σ_j σ_k + ρ̄_{jk} σ̄_j σ̄_k ).

Therefore, it follows that

    C_i = C ( N_i σ_i² + N_i² σ̄_i² + Σ_{k≠i} N_i N_k ( ρ_{ik} σ_i σ_k + ρ̄_{ik} σ̄_i σ̄_k ) )
          / ( Σ_{j=1}^n ( N_j σ_j² + N_j² σ̄_j² )
              + 2 Σ_{j=1}^{n−1} Σ_{k=j+1}^n N_j N_k ( ρ_{jk} σ_j σ_k + ρ̄_{jk} σ̄_j σ̄_k ) ).

Remark. In the case where {X_{i,j}}_{j∈N} are independent sequences consisting of i.i.d. random variables and Y_1, ..., Y_n are independent too, all correlation coefficients are zero. Therefore, it follows that

    C_i = C ( N_i σ_i² + N_i² σ̄_i² ) / Σ_{j=1}^n ( N_j σ_j² + N_j² σ̄_j² ).
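The covariance principle for the fixed-contract-numbers model can be put into a small routine; by construction the allocations add up to the total capital C, since Cov(R_i, R) is the i-th row sum of the covariance matrix of (R_1, ..., R_n). All parameter values below are illustrative assumptions:

```python
def covariance_allocation(C, N, sigma, sigma_bar, rho=None, rho_bar=None):
    # C_i = C * Cov(R_i, R) / Var(R) for the model with fixed contract numbers N_i.
    n = len(N)
    rho = rho or [[0.0] * n for _ in range(n)]          # rho[i][k] = corr(X_i., X_k.)
    rho_bar = rho_bar or [[0.0] * n for _ in range(n)]  # rho_bar[i][k] = corr(Y_i, Y_k)

    def cov(i, k):
        if i == k:
            return N[i] * sigma[i] ** 2 + N[i] ** 2 * sigma_bar[i] ** 2
        return N[i] * N[k] * (rho[i][k] * sigma[i] * sigma[k]
                              + rho_bar[i][k] * sigma_bar[i] * sigma_bar[k])

    var_R = sum(cov(i, k) for i in range(n) for k in range(n))
    # Cov(R_i, R) is the i-th row sum, so the allocations add up to C exactly.
    return [C * sum(cov(i, k) for k in range(n)) / var_R for i in range(n)]

alloc = covariance_allocation(1000.0, N=[50, 80], sigma=[2.0, 1.5],
                              sigma_bar=[0.4, 0.3],
                              rho=[[0.0, 0.2], [0.2, 0.0]],
                              rho_bar=[[0.0, 0.1], [0.1, 0.0]])
```

Passing no correlation matrices reproduces the independent case of the remark above.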

We now consider the second variant of the initial model. In fact, we assume that all N_i, with i ∈ {1, ..., n}, which denote the number of contracts for every unit i, are Poisson-distributed random variables with parameters λ_i > 0. To recapitulate, we will calculate the covariance principle for R having the following form

    R = Σ_{i=1}^n R_i = Σ_{i=1}^n ( ν_i N_i − Σ_{j=1}^{N_i} X_{i,j} − Y_i N_i ),

where

• N_1, ..., N_n are Poisson-distributed random variables, i.e., N_i ~ POIS(λ_i) with λ_i > 0,

• {X_{i,j}}_{j∈N} are uncorrelated sequences of i.i.d. random variables with finite means and finite variances for all i ∈ {1, ..., n},

    μ_i = E[X_{i,j}]  and  σ_i² = Var(X_{i,j}),

• Y_1, ..., Y_n are random variables with finite means and finite variances

    μ̄_i = E[Y_i]  and  σ̄_i² = Var(Y_i),

• all the sequences {X_{i,j}}_{j∈N} with i ∈ {1, ..., n} are independent of the random variables Y_1, ..., Y_n,

• N_1, ..., N_n are assumed to be independent of all the sequences {X_{i,j}}_{j∈N} with i ∈ {1, ..., n} and of Y_1, ..., Y_n.

Remark. Consider sequences {U_i}_{i∈N} and {V_i}_{i∈N} of i.i.d. random variables. For simplicity, assume that E[U_i] = E[V_i] = 0 and Var(U_i) = Var(V_i) = 1 for all i ∈ N. If the correlation ρ = Cov(U_i, V_j) does not depend on i, j ∈ N, then ρ has to be zero. To see this, fix n ∈ N. Then

    0 ≤ Var( Σ_{i=1}^n ( U_i + V_i ) ) = Σ_{i=1}^n ( Var(U_i) + Var(V_i) ) + 2 Σ_{i,j=1}^n Cov(U_i, V_j)
      = 2n + 2n²ρ

implies ρ ≥ −1/n. Similarly,

    0 ≤ Var( Σ_{i=1}^n ( U_i − V_i ) ) = 2n − 2n²ρ,

hence ρ ≤ 1/n. Since n ∈ N was arbitrary, ρ = 0. For this reason, the above sequences {X_{i,j}}_{j∈N}, i ∈ {1, ..., n}, are assumed to be uncorrelated.

In the above model it holds that

    C_i = C Cov(R_i, R) / Var(R),

with

    Cov(R_i, R) = Var(R_i) + Σ_{k≠i} Cov(R_i, R_k)
                = λ_i² σ̄_i² + λ_i ( ( ν_i − μ_i − μ̄_i )² + σ_i² + σ̄_i² ) + Σ_{k≠i} Cov(R_i, R_k)

and

    Var(R) = Σ_{i=1}^n Var(R_i) + 2 Σ_{i=1}^{n−1} Σ_{k=i+1}^n Cov(R_i, R_k).

In detail, we first calculate the covariance between R_i and R_k for i ≠ k separately:

    Cov(R_i, R_k) = Cov( ν_i N_i − Σ_{j=1}^{N_i} X_{i,j} − Y_i N_i , ν_k N_k − Σ_{l=1}^{N_k} X_{k,l} − Y_k N_k )
        = Cov( ν_i N_i, ν_k N_k ) − Cov( ν_i N_i, Σ_{l=1}^{N_k} X_{k,l} ) − Cov( ν_i N_i, Y_k N_k )
          − Cov( Σ_{j=1}^{N_i} X_{i,j}, ν_k N_k ) + Cov( Σ_{j=1}^{N_i} X_{i,j}, Σ_{l=1}^{N_k} X_{k,l} )
          + Cov( Σ_{j=1}^{N_i} X_{i,j}, Y_k N_k ) − Cov( Y_i N_i, ν_k N_k )
          + Cov( Y_i N_i, Σ_{l=1}^{N_k} X_{k,l} ) + Cov( Y_i N_i, Y_k N_k ),

with

    Cov( ν_i N_i, ν_k N_k ) = ν_i ν_k Cov( N_i, N_k ) = ν_i ν_k ρ̂_{ik} √(λ_i λ_k),

    Cov( ν_i N_i, Σ_{l=1}^{N_k} X_{k,l} ) = ν_i Cov( N_i, Σ_{l=1}^{N_k} X_{k,l} )
        = ν_i ( E[ N_i Σ_{l=1}^{N_k} X_{k,l} ] − E[N_i] E[ Σ_{l=1}^{N_k} X_{k,l} ] )
        = ν_i ( E[N_i N_k] μ_k − λ_i λ_k μ_k ) = ν_i μ_k Cov( N_i, N_k ) = ν_i μ_k ρ̂_{ik} √(λ_i λ_k),

    Cov( ν_i N_i, Y_k N_k ) = ν_i ( E[N_i Y_k N_k] − E[N_i] E[Y_k N_k] )
        = ν_i μ̄_k Cov( N_i, N_k ) = ν_i μ̄_k ρ̂_{ik} √(λ_i λ_k),

    Cov( Σ_{j=1}^{N_i} X_{i,j}, ν_k N_k ) = ν_k ( E[ N_k Σ_{j=1}^{N_i} X_{i,j} ] − E[N_k] E[ Σ_{j=1}^{N_i} X_{i,j} ] )
        = ν_k μ_i ρ̂_{ik} √(λ_i λ_k),

    Cov( Σ_{j=1}^{N_i} X_{i,j}, Σ_{l=1}^{N_k} X_{k,l} )
        = E[ Σ_{j=1}^{N_i} X_{i,j} Σ_{l=1}^{N_k} X_{k,l} ] − E[ Σ_{j=1}^{N_i} X_{i,j} ] E[ Σ_{l=1}^{N_k} X_{k,l} ]
        = E[N_i N_k] μ_i μ_k − μ_i μ_k λ_i λ_k = μ_i μ_k Cov( N_i, N_k ) = ρ̂_{ik} √(λ_i λ_k) μ_i μ_k,

    Cov( Σ_{j=1}^{N_i} X_{i,j}, Y_k N_k ) = E[N_i N_k] μ_i μ̄_k − λ_i μ_i λ_k μ̄_k
        = μ_i μ̄_k Cov( N_i, N_k ) = μ_i μ̄_k ρ̂_{ik} √(λ_i λ_k),

    Cov( Y_i N_i, ν_k N_k ) = ν_k ( E[Y_i N_i N_k] − E[Y_i N_i] E[N_k] )
        = ν_k μ̄_i Cov( N_i, N_k ) = ν_k μ̄_i ρ̂_{ik} √(λ_i λ_k),

    Cov( Y_i N_i, Σ_{l=1}^{N_k} X_{k,l} ) = E[N_i N_k] μ̄_i μ_k − λ_i μ̄_i λ_k μ_k
        = μ̄_i μ_k Cov( N_i, N_k ) = μ̄_i μ_k ρ̂_{ik} √(λ_i λ_k),

    Cov( Y_i N_i, Y_k N_k ) = E[Y_i N_i Y_k N_k] − μ̄_i μ̄_k λ_i λ_k
        = E[Y_i Y_k] E[N_i N_k] − μ̄_i μ̄_k λ_i λ_k
        = Cov( Y_i, Y_k ) Cov( N_i, N_k ) + μ̄_i μ̄_k Cov( N_i, N_k ) + λ_i λ_k Cov( Y_i, Y_k )
        = ρ̄_{ik} σ̄_i σ̄_k ρ̂_{ik} √(λ_i λ_k) + μ̄_i μ̄_k ρ̂_{ik} √(λ_i λ_k) + λ_i λ_k ρ̄_{ik} σ̄_i σ̄_k,

where with ρ̂_{ik} and ρ̄_{ik} we denote the linear correlation coefficients between the dependent random variables, i.e., ρ̂_{ik} = corr(N_i, N_k) and ρ̄_{ik} = corr(Y_i, Y_k), respectively. It follows that

    Cov(R_i, R_k) = ρ̂_{ik} √(λ_i λ_k) ( ( ν_i − μ_i − μ̄_i )( ν_k − μ_k − μ̄_k ) + ρ̄_{ik} σ̄_i σ̄_k )
                    + λ_i λ_k ρ̄_{ik} σ̄_i σ̄_k.

A similar calculation with i = k for the nine terms given above gives similar results, with ρ̂_{ik} √(λ_i λ_k) replaced by λ_i and ρ̄_{ik} σ̄_i σ̄_k replaced by σ̄_i². However, for the fifth term we get

    Var( Σ_{j=1}^{N_i} X_{i,j} ) = E[ Σ_{j≠l} X_{i,j} X_{i,l} ] + E[ Σ_{j=1}^{N_i} X_{i,j}² ] − ( E[ Σ_{j=1}^{N_i} X_{i,j} ] )²
        = μ_i² E[ N_i ( N_i − 1 ) ] + E[N_i] E[X_{i,1}²] − ( E[N_i] μ_i )²
        = λ_i² μ_i² + λ_i ( σ_i² + μ_i² ) − λ_i² μ_i²
        = λ_i σ_i² + λ_i μ_i²,

hence

    Var(R_i) = λ_i² σ̄_i² + λ_i ( ( ν_i − μ_i − μ̄_i )² + σ_i² + σ̄_i² ).

Therefore, we can conclude that

    Cov(R_i, R) = λ_i² σ̄_i² + λ_i ( ( ν_i − μ_i − μ̄_i )² + σ_i² + σ̄_i² )
        + Σ_{k≠i} ( ρ̂_{ik} √(λ_i λ_k) ( ( ν_i − μ_i − μ̄_i )( ν_k − μ_k − μ̄_k ) + ρ̄_{ik} σ̄_i σ̄_k )
                    + λ_i λ_k ρ̄_{ik} σ̄_i σ̄_k )

and

    Var(R) = Σ_{i=1}^n Var(R_i) + 2 Σ_{i=1}^{n−1} Σ_{k=i+1}^n Cov(R_i, R_k)
        = Σ_{i=1}^n ( λ_i² σ̄_i² + λ_i ( ( ν_i − μ_i − μ̄_i )² + σ_i² + σ̄_i² ) )
          + 2 Σ_{i=1}^{n−1} Σ_{k=i+1}^n ( ρ̂_{ik} √(λ_i λ_k) ( ( ν_i − μ_i − μ̄_i )( ν_k − μ_k − μ̄_k ) + ρ̄_{ik} σ̄_i σ̄_k )
                                          + λ_i λ_k ρ̄_{ik} σ̄_i σ̄_k ).

Remark. Note that when N_1, ..., N_n are independent random variables, all the {X_{i,j}}_{j∈N} with i ∈ {1, ..., n} are uncorrelated sequences and the random variables Y_1, ..., Y_n are also independent, all correlation coefficients are zero. Thus, it follows that

    C_i = C ( λ_i² σ̄_i² + λ_i ( ( ν_i − μ_i − μ̄_i )² + σ_i² + σ̄_i² ) )
          / Σ_{j=1}^n ( λ_j² σ̄_j² + λ_j ( ( ν_j − μ_j − μ̄_j )² + σ_j² + σ̄_j² ) ).
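The analogous allocation routine for the Poisson variant, with optional correlations ρ̂_{ik} = corr(N_i, N_k) and ρ̄_{ik} = corr(Y_i, Y_k), can be sketched as follows (all parameter values are illustrative assumptions):

```python
import math

def poisson_covariance_allocation(C, lam, nu, mu, mu_bar, sigma, sigma_bar,
                                  rho_hat=None, rho_bar=None):
    # Covariance-principle allocation when N_i ~ POIS(lam_i).
    # rho_hat[i][k] = corr(N_i, N_k), rho_bar[i][k] = corr(Y_i, Y_k).
    n = len(lam)
    rho_hat = rho_hat or [[0.0] * n for _ in range(n)]
    rho_bar = rho_bar or [[0.0] * n for _ in range(n)]
    m = [nu[i] - mu[i] - mu_bar[i] for i in range(n)]

    def cov(i, k):
        if i == k:
            return (lam[i] ** 2 * sigma_bar[i] ** 2
                    + lam[i] * (m[i] ** 2 + sigma[i] ** 2 + sigma_bar[i] ** 2))
        cN = rho_hat[i][k] * math.sqrt(lam[i] * lam[k])   # Cov(N_i, N_k)
        cY = rho_bar[i][k] * sigma_bar[i] * sigma_bar[k]  # Cov(Y_i, Y_k)
        return cN * (m[i] * m[k] + cY) + lam[i] * lam[k] * cY

    var_R = sum(cov(i, k) for i in range(n) for k in range(n))
    return [C * sum(cov(i, k) for k in range(n)) / var_R for i in range(n)]

alloc = poisson_covariance_allocation(
    500.0, lam=[4.0, 9.0], nu=[10.0, 8.0], mu=[6.0, 5.0],
    mu_bar=[1.0, 0.5], sigma=[2.0, 1.5], sigma_bar=[0.5, 0.4])
```

As before, additivity of the covariance principle guarantees that the allocations sum to C.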

5.2

Expected-shortfall principle

An alternative to the covariance principle for the purpose of allocating risk capital is given by conditional expectations. Let R_1, ..., R_n be the stochastic gains of the business units and R = Σ_{i=1}^n R_i the whole profit and loss of an insurance company. Let c ≤ 0 be the capital loss threshold, for example the α-quantile r_α of R. If a total loss R ≤ c occurs, we consider the expected shares E[−R_i | R ≤ c] of the single losses with respect to the total loss. Obviously, it holds that

    E[−R | R ≤ c] = Σ_{i=1}^n E[−R_i | R ≤ c],

where E[−R | R ≤ c] denotes the risk capital of the entire company, while E[−R_i | R ≤ c] denotes the risk capital assigned to business unit i. This risk capital allocation principle is additive, as is the covariance principle, but in addition it can be applied to integrable random variables, so the existence of second moments is no longer required. In order to compute these conditional expectations we can use the procedures proposed in Appendix A.


As an example, we consider R having the usual form for a company consisting of two units, i.e.

    R = R_1 + R_2 = ν_1 N_1 − Σ_{j=1}^{N_1} X_{1,j} − Y_1 N_1 + ν_2 N_2 − Σ_{j=1}^{N_2} X_{2,j} − Y_2 N_2,

with the following assumptions:

• N_1 and N_2 are positive integers,

• the sequences {X_{1,j}}_{j∈N} and {X_{2,j}}_{j∈N} are independent and consist of independent, identically normally distributed random variables, i.e., X_{i,j} ~ N(μ_i, σ_i²) for i = 1, 2 and j ∈ N,

• Y_1 and Y_2 are independent normally distributed random variables, i.e., Y_i ~ N(μ̄_i, σ̄_i²) for i = 1, 2,

• the sequences {X_{1,j}}_{j∈N} and {X_{2,j}}_{j∈N} are independent of Y_1 and Y_2.

It is known that, given the independent normally distributed random variables X_{i,j} for i = 1, 2 and all j ∈ N, the random variables R_1, R_2 and R are also normally distributed, i.e.,

    R_1 ~ N(μ̂_1, σ̂_1²),    R_2 ~ N(μ̂_2, σ̂_2²)    and    R ~ N(μ_R, σ_R²),

where we define

    μ̂_i = N_i ( ν_i − μ_i − μ̄_i ),    σ̂_i² = N_i σ_i² + N_i² σ̄_i²,    for i = 1, 2,

and

    μ_R = μ̂_1 + μ̂_2,    σ_R² = σ̂_1² + σ̂_2².

We denote the density of the standard normal distribution by

    φ(u) = (1/√(2π)) e^{−u²/2},    u ∈ R,

and the standard normal distribution function by

    Φ(t) = ∫_{−∞}^t φ(u) du,    t ∈ R.

Note that

    P(R ≤ t) = Φ( ( t − μ_R ) / σ_R ).

We are now interested in the measures of risk E[R_i | R ≤ c] for a given value c ∈ R and i = 1, 2. Using formula (A6) from Appendix A, we obtain

    E[R_i | R ≤ c] = μ̂_i − ( σ̂_i² / σ_R ) (log Φ)′( ( c − μ_R ) / σ_R ).
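The closed form for E[R_i | R ≤ c] is easy to evaluate with the error function; additivity of the principle (the shares sum to E[R | R ≤ c]) follows from σ̂_1² + σ̂_2² = σ_R². All unit parameters below are illustrative assumptions, with a threshold c below the mean:

```python
import math

def phi(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def es_share(mu_hat_i, var_hat_i, mu_R, sd_R, c):
    # E[R_i | R <= c] = mu_hat_i - (var_hat_i / sd_R) * phi(z)/Phi(z), z = (c - mu_R)/sd_R
    z = (c - mu_R) / sd_R
    return mu_hat_i - (var_hat_i / sd_R) * phi(z) / Phi(z)

# Illustrative (assumed) unit parameters
N = [50, 80]; nu = [10.0, 8.0]; mu = [6.0, 5.0]; mu_bar = [1.0, 0.5]
sigma = [2.0, 1.5]; sigma_bar = [0.4, 0.3]
mu_hat = [N[i] * (nu[i] - mu[i] - mu_bar[i]) for i in range(2)]
var_hat = [N[i] * sigma[i] ** 2 + N[i] ** 2 * sigma_bar[i] ** 2 for i in range(2)]
mu_R = sum(mu_hat)
sd_R = math.sqrt(sum(var_hat))

c = 250.0   # loss threshold below the mean, chosen for illustration
shares = [es_share(mu_hat[i], var_hat[i], mu_R, sd_R, c) for i in range(2)]
total = es_share(mu_R, sd_R ** 2, mu_R, sd_R, c)   # E[R | R <= c], by (A.3)
```

Each conditional share lies below the corresponding unconditional mean μ̂_i, since φ(z)/Φ(z) > 0.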

Appendix A

Calculating expected shortfall

In the following pages we introduce some rules which will prove useful for calculating the expected shortfall.

Let X be an integrable random variable with distribution function F_X. Then, for all c ∈ R such that F_X(c) < 1, it holds that

    E[X | X > c] = ( 1 / ( 1 − F_X(c) ) ) ∫_c^∞ x F_X(dx).    (A.1)

Let X_1, ..., X_n be exchangeable¹ integrable random variables and let X = Σ_{i=1}^n X_i. Then, for all c ∈ R such that P(X > c) > 0 and for all i, j ∈ {1, ..., n}, it holds that

    E[X_i | X > c] = E[X_j | X > c].

Since the conditional expectation is linear, it follows for all i ∈ {1, ..., n} that

    E[X_i | X > c] = (1/n) E[X | X > c],

and this last conditional expectation can be computed by means of (A.1).

Let X and Y be independent integrable random variables with distribution functions F_X and F_Y, respectively, and let c ∈ R be such that P(X + Y > c) > 0. It then holds that

    E[X | X + Y > c] = ( 1 / ( 1 − F_X ∗ F_Y (c) ) ) ∫_R x ( 1 − F_Y( c − x ) ) F_X(dx),    (A.2)

where F_X ∗ F_Y denotes the convolution of the distribution functions F_X and F_Y, that is, the distribution function of the sum X + Y, i.e., P(X + Y > c) = 1 − F_X ∗ F_Y (c). This result is generalizable to independent random variables X_1, ..., X_n with distribution functions F_1, ..., F_n, respectively: let X = X_1 and Y = X_2 + ... + X_n; then X and Y are likewise independent with distribution functions F_X = F_1 and F_Y = F_2 ∗ ... ∗ F_n, respectively, and the conditional expectation E[X_1 | X_1 + ... + X_n > c] can then be computed by means of (A.2).

¹ Let I be a countable set. A sequence X_i, i ∈ I, of random variables on a probability space (Ω, F, P) is exchangeable if, for every permutation ρ of I, the distributions of (X_{ρ(i)}, i ∈ I) and (X_i, i ∈ I) are identical. Note that a finite or infinite i.i.d. sequence is exchangeable.

Let

    φ(t) = (1/√(2π)) e^{−t²/2},    t ∈ R,

and

    Φ(x) = ∫_{−∞}^x φ(t) dt,    x ∈ R,

denote the density and the distribution function of the standard normal distribution, respectively. Given X ~ N(0, 1) and c ∈ R, we obtain with the substitution z = x²/2

    E[X | X ≤ c] = (1/Φ(c)) ∫_{−∞}^c x φ(x) dx
                 = −( 1 / ( √(2π) Φ(c) ) ) ∫_{c²/2}^∞ e^{−z} dz
                 = −e^{−c²/2} / ( √(2π) Φ(c) )
                 = −φ(c)/Φ(c)
                 = −(log Φ)′(c).

If Y ~ N(μ, σ²), then X := ( Y − μ ) / σ ~ N(0, 1), and with the above result we obtain

    E[Y | Y ≤ c] = E[ μ + σX | μ + σX ≤ c ]
                 = μ + σ E[ X | X ≤ ( c − μ )/σ ]
                 = μ − σ (log Φ)′( ( c − μ )/σ ).    (A.3)

In particular, for c = 0,

    E[−Y | Y ≤ 0] = −μ + σ (log Φ)′( −μ/σ ).
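Formula (A.3) can be verified by numerical integration of the defining integral, using Simpson's rule on a truncated range (the test values of μ, σ and c are arbitrary):

```python
import math

def phi(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

def es_closed(mu, s, c):
    # (A.3): E[Y | Y <= c] = mu - s * phi(z)/Phi(z) with z = (c - mu)/s
    z = (c - mu) / s
    return mu - s * phi(z) / Phi(z)

def es_numeric(mu, s, c, n=50_000):
    # Simpson's rule for E[Y | Y <= c] = (int_{-inf}^{c} y f(y) dy) / P(Y <= c),
    # truncating the lower limit at mu - 12 s (the neglected tail is negligible)
    lo = mu - 12 * s
    h = (c - lo) / n
    def g(y):
        return y * phi((y - mu) / s) / s
    total = g(lo) + g(c)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * g(lo + k * h)
    return (total * h / 3.0) / Phi((c - mu) / s)

mu, s, c = 2.0, 3.0, 0.0
```

The two evaluations agree to many digits, and the conditional mean lies below the threshold c, as it must.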

Now consider independent random variables X, Y ~ N(0, 1) and constants c ∈ R, σ > 0. To compute E[X | σX + Y ≤ c], define

    γ = c / √(1 + σ²)

and note that P(σX + Y ≤ c) = Φ(γ), because ( σX + Y ) / √(1 + σ²) ~ N(0, 1). By conditioning on the σ-algebra generated by X,

    E[X | σX + Y ≤ c] = (1/Φ(γ)) E[ E[ X 1_{σX+Y≤c} | X ] ]
                      = (1/Φ(γ)) E[ X P( Y ≤ c − σX | X ) ].

Since X and Y are independent, P( Y ≤ c − σX | X ) = Φ( c − σX ) P-almost surely. Since x φ(x) = −φ′(x), partial integration gives

    E[ X Φ( c − σX ) ] = ∫_R x φ(x) Φ( c − σx ) dx = −σ ∫_R φ(x) φ( c − σx ) dx.

Since

    x² + ( c − σx )² = (1 + σ²) x² − 2cσx + c² = ( √(1 + σ²) x − σγ )² + γ²,

the substitution u = √(1 + σ²) x − σγ yields

    ∫_R φ(x) φ( c − σx ) dx = φ(γ) ∫_R φ( √(1 + σ²) x − σγ ) dx
                            = ( φ(γ) / √(1 + σ²) ) ∫_R φ(u) du
                            = φ(γ) / √(1 + σ²).

Therefore,

    E[X | σX + Y ≤ c] = −( σ / √(1 + σ²) ) φ(γ)/Φ(γ)
                      = −( σ / √(1 + σ²) ) (log Φ)′( c / √(1 + σ²) ).    (A5)
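Formula (A5) can likewise be checked against a direct quadrature of E[X Φ(c − σX)]/Φ(γ); the values of σ and c below are arbitrary:

```python
import math

def phi(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2)))

sigma, c = 1.7, -0.8
gamma = c / math.sqrt(1 + sigma ** 2)

# closed form (A5)
closed = -(sigma / math.sqrt(1 + sigma ** 2)) * phi(gamma) / Phi(gamma)

# direct quadrature: E[X | sigma X + Y <= c] = E[X * Phi(c - sigma X)] / Phi(gamma)
n, lo, hi = 50_000, -12.0, 12.0
h = (hi - lo) / n
def g(x):
    return x * phi(x) * Phi(c - sigma * x)
total = g(lo) + g(hi)
for k in range(1, n):
    total += (4 if k % 2 else 2) * g(lo + k * h)
numeric = (total * h / 3.0) / Phi(gamma)
```

Both values are negative (conditioning on a low value of σX + Y pulls X below its unconditional mean of zero) and agree to high precision.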

More generally, for independent X ~ N(μ_X, σ_X²) and Y ~ N(μ_Y, σ_Y²) with σ_X > 0 and σ_Y > 0, we obtain

    E[X | X + Y ≤ c] = μ_X + σ_X E[ ( X − μ_X )/σ_X | ( σ_X/σ_Y )( X − μ_X )/σ_X + ( Y − μ_Y )/σ_Y ≤ ( c − μ_X − μ_Y )/σ_Y ].

Since ( X − μ_X )/σ_X and ( Y − μ_Y )/σ_Y are independent N(0, 1) random variables, equation (A5) gives

    E[X | X + Y ≤ c] = μ_X − σ_X ( ( σ_X/σ_Y ) / √( 1 + ( σ_X/σ_Y )² ) ) (log Φ)′( ( ( c − μ_X − μ_Y )/σ_Y ) / √( 1 + ( σ_X/σ_Y )² ) )
                     = μ_X − ( σ_X² / √( σ_X² + σ_Y² ) ) (log Φ)′( ( c − μ_X − μ_Y ) / √( σ_X² + σ_Y² ) ).    (A6)

Appendix B

The Lagrange multipliers rule

We consider the general case. Let m and n be natural numbers and U an open subset of R^{n+m}. Suppose x ∈ R^n, y ∈ R^m, f : U → R and g : U → R^m. Let us examine the problem of finding the extrema of

    f(x, y) = f( x_1, ..., x_n; y_1, ..., y_m )

subject to the m constraints

    g_1( x_1, ..., x_n; y_1, ..., y_m ) = 0,
    ...
    g_m( x_1, ..., x_n; y_1, ..., y_m ) = 0.

Theorem. Let m, n and U be defined as above. Suppose that f ∈ C¹(U, R) and g ∈ C¹(U, R^m). Moreover, assume that (ξ, η) ∈ M = { (x, y) ∈ U | g(x, y) = 0 } is such that the restriction f|_M of f to M has a local extremum at (ξ, η); this means that there is some neighborhood V of (ξ, η) such that either f(x, y) ≤ f(ξ, η) for all (x, y) ∈ V ∩ M or f(x, y) ≥ f(ξ, η) for all (x, y) ∈ V ∩ M. Suppose also that the m × m matrix ∂g(ξ, η)/∂y has nonzero determinant. Then a vector λ = (λ_1, ..., λ_m) ∈ R^m exists such that the function

    H(x, y, z) = f(x, y) + ⟨ z, g(x, y) ⟩ = f(x, y) + Σ_{j=1}^m z_j g_j(x, y),    (x, y, z) ∈ U × R^m,

has a critical point at (ξ, η, λ).

Proof. See Walter (1990), pp. 130-133.


As a consequence, at the point (ξ, η, λ) it holds that

    H_{x_i} = f_{x_i} + ⟨ λ, g_{x_i} ⟩ = 0,    i = 1, ..., n,
    H_{y_k} = f_{y_k} + ⟨ λ, g_{y_k} ⟩ = 0,    k = 1, ..., m,    (B.1)
    H_{λ_k} = g_k = 0,    k = 1, ..., m.

The variables λ_1, ..., λ_m are called Lagrange multipliers. The virtue of this result is that if one seeks points (ξ, η) at which f|_M has local extrema, then one need only search among those (ξ, η) ∈ M for which the system (B.1) has a solution λ.
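As a minimal illustration of the system (B.1), take n = m = 1 and maximize f(x, y) = x + y subject to g(x, y) = x² + y² − 1 = 0; the known maximizer is (1/√2, 1/√2) (an example chosen for illustration only):

```python
import math

# Maximize f(x, y) = x + y subject to g(x, y) = x^2 + y^2 - 1 = 0 (n = m = 1).
xi = eta = 1 / math.sqrt(2)   # known maximizer on the unit circle

# H(x, y, z) = f(x, y) + z * g(x, y); stationarity in x gives z = -1/(2 xi).
z = -1 / (2 * xi)

H_x = 1 + 2 * z * xi            # f_x + z g_x
H_y = 1 + 2 * z * eta           # f_y + z g_y
H_z = xi ** 2 + eta ** 2 - 1    # the constraint g itself

# hypothesis of the theorem: dg/dy = 2 eta is nonzero at the extremum
dg_dy = 2 * eta
```

All three partial derivatives of H vanish at (ξ, η, λ), exactly as the system (B.1) predicts.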

Bibliography

Albrecht, P. (1997). Risk Based Capital Allocation and Risk Adjusted Performance Management in Property/Liability Insurance: A Risk Theoretical Framework. Joint Day Proceedings of the XXVIIIth International ASTIN Colloquium & the 7th International AFIR Colloquium, Cairns, Australia, 13 August 1997.

Artzner, P., Delbaen, F., Eber, J.-M., Heath, D. (1999). Coherent measures of risk. Mathematical Finance 9(3), July 1999.

Billingsley, P. (1968). Convergence of Probability Measures. John Wiley & Sons, New York.

Billingsley, P. (1979). Probability and Measure. John Wiley & Sons, New York.

Breiman, L. (1968). Probability. Addison-Wesley Publishing Company.

Crouhy, M., Turnbull, S. M., Wakeman, L. M. (1999). Measuring risk-adjusted performance. Journal of Risk 2(1), Fall 1999.

Denault, M. (1999). Coherent allocation of risk capital. RiskLab Research Paper. Available at http://www.risklab.ch/Papers.html.

Embrechts, P., Klüppelberg, C., Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Springer, Berlin.

Feller, W. (1966). An Introduction to Probability Theory and its Applications, Volume II. John Wiley & Sons, Inc.

Fristedt, B., Gray, L. (1997). A Modern Approach to Probability Theory. Birkhäuser, Boston.

Gilbert, J., Gilbert, L. (1995). Linear Algebra and Matrix Theory. Academic Press, Inc.

Grimmett, G. R., Stirzaker, D. R. (1992). Probability and Random Processes. Oxford Science Publications, Oxford.

Johnson, N. L., Kotz, S., Balakrishnan, N. (1994). Continuous Univariate Distributions, Volume 1, 2nd Edition. John Wiley & Sons, Inc.

Karlin, S., Taylor, H. M. (1975). A First Course in Stochastic Processes, 2nd Edition. Academic Press, New York.

Schmock, U., Straumann, D. Allocation of risk capital and performance measurement. Working paper, private communication.

Stromberg, K. R. (1981). An Introduction to Classical Real Analysis. Wadsworth International Group, Belmont, California.

Tasche, D. (1999). Risk contributions and performance measurement. Research paper, Zentrum Mathematik (SCA), TU München.

Walter, W. (1990). Analysis II. Springer-Verlag.
