Anestis Antoniadis
[Anestis.Antoniadis@imag.fr]
Title:
"Nonparametric estimation for the location of a change-point in an
otherwise smooth hazard function under random censoring"
A nonparametric wavelet-based estimator is proposed for the location
of a change-point in an otherwise smooth hazard function under noninformative
random right censoring. The proposed estimator is based on wavelet coefficient
differences via an appropriate parameterization of the time-frequency plane.
The study of the estimator is facilitated by a strong representation theorem
for the Kaplan-Meier estimator.
The performance of the estimator is checked via simulations, and two
real examples conclude the talk.
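To give a flavour of the kind of procedure involved, the following Python sketch locates a jump in a hazard function from right-censored data by comparing local averages of a crude Nelson-Aalen-type hazard estimate on either side of each grid point; it is a simplified stand-in for the wavelet estimator of the talk, and the simulated data, grid and window width are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate right-censored survival data whose hazard jumps from 0.5 to 2.0 at t = 1.
    n, tau = 2000, 1.0
    e = -np.log(rng.uniform(size=n))            # unit-exponential variates
    # Invert the cumulative hazard H(t) = 0.5 t (t <= 1), 0.5 + 2 (t - 1) (t > 1).
    lifetimes = np.where(e <= 0.5 * tau, e / 0.5, tau + (e - 0.5 * tau) / 2.0)
    censor = rng.exponential(scale=8.0, size=n)
    obs = np.minimum(lifetimes, censor)
    delta = (lifetimes <= censor).astype(int)

    # Crude hazard estimate on a grid via Nelson-Aalen-type increments.
    grid = np.linspace(0.0, 1.6, 161)
    haz = np.zeros(len(grid) - 1)
    for k in range(len(haz)):
        a, b = grid[k], grid[k + 1]
        at_risk = np.sum(obs >= a)
        events = np.sum((obs >= a) & (obs < b) & (delta == 1))
        haz[k] = events / (at_risk * (b - a)) if at_risk > 0 else 0.0

    # Haar-flavoured statistic: difference of local means to the right and left.
    h = 16                                      # window width in grid cells (arbitrary)
    diff = np.array([haz[k:k + h].mean() - haz[k - h:k].mean()
                     for k in range(h, len(haz) - h)])
    k_hat = h + int(np.argmax(np.abs(diff)))
    print("estimated change-point location:", round(0.5 * (grid[k_hat] + grid[k_hat + 1]), 3))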
Angelos Dassios
[A.Dassios@lse.ac.uk]
Title:
"Quintiles of Levy processes and applications in mathematical finance"
The study of the quantiles of Lévy processes has produced the remarkable
result that the distribution of such a quantile can be expressed as a convolution
of extremes of independent rescaled copies of the process.
We will discuss this result, its application in option pricing and
some possible extensions.
(We will also discuss the numerical problems that arise.)
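For concreteness, the identity in the special case of a standard Brownian motion can be written down explicitly; the LaTeX below states it in that case, with the alpha-quantile defined through occupation times, and is offered only as an illustration of the general Lévy result discussed in the talk.

    % alpha-quantile of the process X over [0, t], defined via occupation times
    M(\alpha, t) = \inf\left\{ x : \int_0^t \mathbf{1}\{X_s \le x\}\, ds > \alpha t \right\}
    % For a standard Brownian motion B, with B^{(1)}, B^{(2)} independent copies:
    M(\alpha, t) \,\overset{d}{=}\, \sup_{0 \le s \le \alpha t} B^{(1)}_s
        \;+\; \inf_{0 \le s \le (1-\alpha) t} B^{(2)}_s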
Paul Embrechts
[embrecht@math.ethz.ch]
Title:
"Key Issues Underlying Integrated Risk Management (IRM)"
The I in IRM stands, on the one hand, for the integration of various aspects of financial risk (market, credit, operational, liquidity) within a banking environment, but also for the integration of underwriting and investment risk in an insurance or bancassurance context. Over recent years, the development of quantitative IRM tools has undergone dramatic changes. The latter were accompanied by many successes (a much better understanding of financial risk management), but also by some failures (e.g. LTCM).
In this talk I will stress the need for a well-understood quantitative
RM methodology, but at the same time express the need for a strong qualitative
understanding.
A key message of my talk will be the need for actuaries, finance specialists
and mathematicians (statisticians) to work together optimally.
I will discuss these ideas using such standard concepts as Value-at-Risk
in Finance and Dynamic Financial Analysis in insurance.
A critical assessment of "what went wrong" in the LTCM case will be
given.
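As a point of reference for the quantitative side of the discussion, the short Python sketch below computes an empirical Value-at-Risk (and, for comparison, an expected-shortfall) figure from a simulated profit-and-loss sample; the distribution, confidence level and units are placeholders, not taken from the talk.

    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholder profit-and-loss sample; in practice this would come from a
    # historical or simulated P&L distribution for the portfolio in question.
    pnl = rng.standard_t(df=4, size=100_000) * 1e6   # heavy-tailed P&L in currency units

    def value_at_risk(pnl, level=0.99):
        """Empirical VaR: the loss exceeded with probability 1 - level."""
        return -np.quantile(pnl, 1.0 - level)

    def expected_shortfall(pnl, level=0.99):
        """Average loss beyond the VaR level (a common complement to VaR)."""
        var = value_at_risk(pnl, level)
        losses = -pnl
        return losses[losses >= var].mean()

    print("99% VaR:", round(value_at_risk(pnl), 2))
    print("99% ES :", round(expected_shortfall(pnl), 2))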
Vladimir Kalashnikov
[vkalash@math.ku.dk]
Title:
"Probabilistic methods in insurance mathematics"
A spectrum of insurance models requiring the use of probabilistic methods
is presented. The mathematical background that is necessary to analyse
these models is discussed. A variety of examples and new problems associated
with the stability of insurance models will also be considered.
Marianthi Markatou
[markat@stat.columbia.edu]
Title:
"Model Selection Based on Statistical Distances"
John McCutcheon
[J.J.McCutcheon@ma.hw.ac.uk]
Title:
"The new U.K. standard tables of mortality (for assured lives and
life-office pensioners)
and the Continuous Mortality Investigation Bureau mortality-improvement
model for pensioners"
"In July 1999 the Continuous Mortality Investigation Bureau (of the Faculty of Actuaries and the Institute of Actuaries) published its most recent series of mortality tables relating to assured lives and life-office pensioners. The complete set of new tables, which reflect the mortality experience of various groups of lives during the four-year period 1991-94, is known as 'The 92 Series'.
A study of the trends in the experiences of assured lives and life-office pensioners shows that for both groups mortality is continuing to improve.
The financial consequences of decreasing mortality among pensioners and annuitants can be significant. Accordingly, when publishing its new tables for pensioners the CMIB also published projection factors to allow for improvements in mortality with the passage of time.
A brief description will be given of the CMIB mortality-improvement
model. Some numerical results will indicate some of the consequences for
financial institutions of steadily decreasing mortality rates.
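Purely as a generic illustration of how mortality-improvement projections feed into calculations (the CMIB's actual age-dependent reduction factors are not reproduced here), the Python sketch below applies an assumed constant annual improvement rate to a hypothetical base-table mortality rate.

    # Generic illustration only: a constant annual mortality-improvement rate
    # applied to a hypothetical base-table value; the CMIB projection factors
    # published with 'The 92 Series' have a different, age-dependent form.

    def projected_q(q_base, years_elapsed, annual_improvement=0.02):
        """Mortality rate after applying a constant annual improvement factor."""
        return q_base * (1.0 - annual_improvement) ** years_elapsed

    q_65_base = 0.015          # hypothetical base-table mortality rate at age 65
    for t in (0, 10, 20, 30):  # years after the base period
        print(f"t = {t:2d} years: q = {projected_q(q_65_base, t):.5f}")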
Ragnar Norberg
[ragnar@math.ku.dk]
Title:
"Developments in life insurance mathematics"
George Roussas
[ggroussas@ucdavis.edu]
Title:
"Some Aspects of Nonparametric Estimation Under Dependence"
Much of statistical inference has been carried out under the basic assumption
that the underlying random variables are independent and identically distributed.
It so happens, however, that in many practical situations the independence
assumption is not sustainable. Instead, it has to be replaced by some kind
of dependence among the random variables involved. Several modes of
dependence have been employed, and the choice of a suitable kind may depend,
to a certain degree, on the nature of the applications envisioned.
Some of the stochastic models used in the literature are those involving second-order stationarity, or strict stationarity, or ergodicity, or the martingale property, or a combination thereof. Various time series models have also been used extensively with great success. Another very popular class of stochastic models employed has been that of Markovian processes. An enlargement, in a certain sense, of this last class of models is provided by the various modes of mixing. Since the late 1960s and early 1970s, still another kind of dependence has found its way into the probabilistic/statistical literature and, almost independently, also into statistical mechanics. This mode of dependence, which is expressed in terms of covariances, is referred to as association in the probabilistic/statistical literature, and as the FKG inequality in statistical mechanics.
Although the vast majority of early statistical work was done in a
parametric framework, nonparametric methodology has permeated the statistical
literature for about half a century now. The main reason for its flourishing
has been the inadequacy of parametric models to describe some situations in a
satisfactory manner.
This is particularly so in models built on dependence assumptions.
This presentation revolves around the problem of nonparametrically
estimating several quantities of statistical interest. It is also stipulated
that a specific kind of dependence prevails each time. The entities estimated
are distribution and survival functions, probability density functions and
their derivatives, hazard rates, percentile functions, and regression
functions. These quantities are widely used in many subject areas, including
actuarial science. Finally, it should be mentioned that the preferred approach
to these estimation problems is the one based on the so-called kernel method.
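To fix ideas about the kernel method mentioned above, the following Python sketch computes a Gaussian kernel density estimate and a Nadaraya-Watson regression estimate from a simulated dependent (AR(1)) sample; the bandwidths and data-generating process are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulate a dependent sample from an AR(1) process (one simple departure
    # from the i.i.d. assumption; the talk considers more general dependence).
    n, phi = 1000, 0.5
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    y = np.sin(x) + 0.3 * rng.normal(size=n)     # regression data

    def gaussian_kernel(u):
        return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

    def kde(x_grid, data, h):
        """Kernel density estimate f_hat(x) = (1 / nh) sum K((x - X_i) / h)."""
        u = (x_grid[:, None] - data[None, :]) / h
        return gaussian_kernel(u).mean(axis=1) / h

    def nadaraya_watson(x_grid, x_data, y_data, h):
        """Kernel regression estimate m_hat(x) = sum K_i y_i / sum K_i."""
        w = gaussian_kernel((x_grid[:, None] - x_data[None, :]) / h)
        return (w * y_data[None, :]).sum(axis=1) / w.sum(axis=1)

    grid = np.linspace(-3, 3, 61)
    f_hat = kde(grid, x, h=0.3)
    m_hat = nadaraya_watson(grid, x, y, h=0.3)
    print("density estimate at 0   :", round(f_hat[30], 3))
    print("regression estimate at 0:", round(m_hat[30], 3))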
Theofanis Sapatinas
[fanis1@hol.gr]
Title:
"An introduction to wavelet analysis and some statistical applications"
In recent years there has been a considerable development in the use
of wavelet methods in statistics. As a result, we are now at the stage
where it is reasonable to consider such methods to be another standard
tool of the applied statistician rather than a research novelty. With that
in mind, this talk is intended to give a relatively accessible introduction
to standard wavelet analysis and to provide some common uses of wavelet
methods in statistical applications.
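As one concrete example of such a use, the Python sketch below carries out standard wavelet shrinkage (soft thresholding with the universal threshold) for nonparametric regression, assuming the third-party PyWavelets package is available; the signal, noise level and wavelet are arbitrary choices.

    import numpy as np
    import pywt  # PyWavelets; an assumed third-party dependency

    rng = np.random.default_rng(3)

    # Noisy observations of a signal on a dyadic grid.
    n = 1024
    t = np.linspace(0, 1, n)
    signal = np.sin(4 * np.pi * t) + 0.5 * np.sign(t - 0.5)   # smooth part plus a jump
    y = signal + 0.2 * rng.normal(size=n)

    # Discrete wavelet transform, universal threshold, inverse transform.
    coeffs = pywt.wavedec(y, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level from the finest scale
    lam = sigma * np.sqrt(2 * np.log(n))                      # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    estimate = pywt.waverec(denoised, "db4")[:n]

    print("root mean squared error:", round(float(np.sqrt(np.mean((estimate - signal) ** 2))), 4))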
Jef Teugels
[jef.teugels@wis.kuleuven.ac.be]
Title:
"Light and Heavy Claims in Insurance Mathematics"
We discuss three classical problems from insurance mathematics.
We assume that we are looking at a homogeneous portfolio in which the
number of claims over the time period up to t is denoted by N(t), while the
claim sizes are considered to form a sample from a distribution F. We assume
that claim times and claim sizes are independent. We will give an overview
of results dealing with the total claim amount, with ruin problems in infinite
and finite time, and with reinsurance. We will pay special attention to
the differences that occur when the distribution F either has an exponentially
bounded tail or is subexponential. The first case treats the
situation where the claims are considered to be light-tailed, while the
second covers instances where claims are heavy-tailed.
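The contrast can be illustrated by simulation: the Python sketch below draws the total claim amount S(t) = X_1 + ... + X_N(t) for a Poisson claim-number process, once with exponential (light-tailed) and once with Pareto (heavy-tailed) claim sizes of the same mean; all parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    def total_claims(claim_sampler, lam=5.0, t=10.0, n_paths=20_000):
        """Simulate S(t) = X_1 + ... + X_N(t) with N(t) ~ Poisson(lam * t)."""
        counts = rng.poisson(lam * t, size=n_paths)
        return np.array([claim_sampler(k).sum() if k > 0 else 0.0 for k in counts])

    # Light-tailed claims: Exponential with mean 1.
    light = total_claims(lambda k: rng.exponential(1.0, size=k))

    # Heavy-tailed claims: Pareto with tail index 1.5, scaled to mean 1.
    alpha = 1.5
    scale = (alpha - 1.0) / alpha          # scaled Pareto on [scale, inf) with mean 1
    heavy = total_claims(lambda k: scale * (1.0 + rng.pareto(alpha, size=k)))

    for name, s in (("exponential", light), ("pareto", heavy)):
        print(f"{name:12s} mean = {s.mean():7.1f}   99.9% quantile = {np.quantile(s, 0.999):9.1f}")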
Howard Waters
[H.R.Waters@ma.hw.ac.uk]
Title:
"Life insurance underwriting, Coronary Heart Disease and genetic
testing"
In this talk I will describe a mathematical model for the development of Coronary Heart Disease (CHD) that incorporates all the information an underwriter has available at the time a life insurance policy is proposed. One of the motivations for producing this model is to assess the financial implications of the availability of genetic information relevant to CHD.
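The structure of the model is not specified in this abstract; purely to indicate the kind of calculation such a model supports, the Python sketch below uses an invented three-state Markov chain (healthy, CHD, dead) with hypothetical one-year transition probabilities.

    import numpy as np

    # Hypothetical three-state model: 0 = healthy, 1 = CHD, 2 = dead.
    # The one-year transition probabilities below are invented for illustration
    # and are not taken from the model described in the talk.
    P = np.array([
        [0.97, 0.02, 0.01],   # healthy -> healthy / CHD / dead
        [0.00, 0.93, 0.07],   # CHD     -> CHD / dead (no recovery assumed)
        [0.00, 0.00, 1.00],   # dead is absorbing
    ])

    def state_distribution(start_state, years):
        """Distribution over the three states after the given number of years."""
        dist = np.zeros(3)
        dist[start_state] = 1.0
        return dist @ np.linalg.matrix_power(P, years)

    for years in (5, 10, 20):
        p_healthy, p_chd, p_dead = state_distribution(0, years)
        print(f"after {years:2d} years: P(CHD) = {p_chd:.3f}, P(dead) = {p_dead:.3f}")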
Yannis Yatracos
[yatracos@stat.nus.edu.sg]
Title:
"Clusters, Variance, Regression and Projection Pursuit (PP)"
A statistic that appears naturally in simple regression and in a decomposition
of the sample variance is used to define a projection pursuit index which
indicates data clustering, groups of remote cases in the factor space in
multiple regression, and different data structures.
The index is successfully applied in several examples.
A version of the statistic can also be used to group treatment means.
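The index itself is not reproduced in this abstract; as a generic illustration of the projection-pursuit recipe, the Python sketch below scores random projection directions with a simple stand-in clustering index (negative excess kurtosis, which is large for bimodal projections) and keeps the best direction.

    import numpy as np

    rng = np.random.default_rng(5)

    # Two well-separated Gaussian clusters in five dimensions.
    X = np.vstack([
        rng.normal(loc=0.0, size=(200, 5)),
        rng.normal(loc=3.0, size=(200, 5)),
    ])
    X = X - X.mean(axis=0)

    def stand_in_index(z):
        """Stand-in clustering index: negative excess kurtosis (low kurtosis
        suggests bimodality); the talk's variance-decomposition statistic differs."""
        z = (z - z.mean()) / z.std()
        return -(np.mean(z**4) - 3.0)

    best_dir, best_val = None, -np.inf
    for _ in range(500):                    # crude random search over directions
        a = rng.normal(size=5)
        a /= np.linalg.norm(a)
        val = stand_in_index(X @ a)
        if val > best_val:
            best_dir, best_val = a, val

    print("best index value:", round(best_val, 3))
    print("best direction  :", np.round(best_dir, 2))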
Alexandros Zimbidis
Title:
"Optimal premium control for a group of insurance companies"
The paper considers a group of insurance companies that belong to the same owner (holding company). These companies operate independently throughout each financial year, using a standard experience-rating system to charge the respective total premium to their clients. Then, at the end of the year and according to the result of each company, the owner transfers from each company to the others a certain percentage of its reserve. We describe the above problem with a linear control system and, using advanced optimization techniques, calculate the optimal premium strategy for each company.
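The paper's actual multi-company system and solution are not reproduced here; as a hedged single-company illustration of a linear-quadratic premium-control formulation, the Python sketch below solves a scalar reserve equation by a backward Riccati recursion, with all parameter values invented.

    import numpy as np

    # Hypothetical scalar set-up for one company, in deviation variables:
    #   x_t = reserve minus target reserve,  u_t = premium minus expected claims,
    #   dynamics x_{t+1} = x_t + u_t,  cost sum_t (q x_t^2 + r u_t^2) + q_T x_T^2.
    # All numbers are invented; the paper's multi-company system is richer.
    T, q, r, q_T = 10, 1.0, 4.0, 5.0

    # Backward Riccati recursion for the scalar linear-quadratic problem.
    P = np.zeros(T + 1)
    K = np.zeros(T)
    P[T] = q_T
    for t in range(T - 1, -1, -1):
        K[t] = P[t + 1] / (r + P[t + 1])           # optimal feedback gain
        P[t] = q + P[t + 1] - P[t + 1] ** 2 / (r + P[t + 1])

    # Simulate the closed-loop reserve path from an initial shortfall of 10.
    x = 10.0
    for t in range(T):
        u = -K[t] * x                              # optimal premium loading
        print(f"year {t}: reserve gap {x:6.2f}, premium loading {u:6.2f}")
        x = x + u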