

Firstly we compare the weighted record statistics. An explanation for this is the strength of the slope.

A positive concave trend increases less towards the end of the time series. Hence there will be fewer records at the end of the time series, and Uo will perform worse than Lr.

As our version of T2 also uses Uo, we obtain similar results for this test statistic. In the convex case similar results can be obtained for Lr, as a convex upward trend of the original sequence corresponds to a concave downward trend of the negated reversed series.
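The plain, unweighted record counts underlying these statistics can be sketched as follows. The exact weighting used in the paper's statistics Lr and Uo is not reproduced here, so this is only an illustrative sketch:

```python
import numpy as np

def upper_records(x):
    """Boolean indicators: x[t] is an upper record if it exceeds all earlier values."""
    x = np.asarray(x, dtype=float)
    rec = np.ones(len(x), dtype=bool)
    rec[1:] = x[1:] > np.maximum.accumulate(x)[:-1]
    return rec

def n_upper_records(x):
    """Total number of upper records in the series."""
    return int(upper_records(x).sum())

def n_records_neg_reversed(x):
    """Upper records of the negated, reversed series: counting records
    'from the other end', the device used when relating Lr and Uo."""
    return n_upper_records(-np.asarray(x, dtype=float)[::-1])

y = [1.0, 3.0, 2.0, 5.0, 4.0]
n_up = n_upper_records(y)          # records occur at t = 0, 1, 3
n_rev = n_records_neg_reversed(y)
```

An increasing trend produces many upper records late in the series, which is why record counts carry information about monotone trends.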

The previous findings are also confirmed for other sample sizes n in the linear case (see Fig.), as well as in the case of a convex or concave trend.

We show the concave case here, because the differences are qualitatively the same, but slightly bigger than for the linear or the convex trend.

Conclusions about an optimal splitting for the other rank tests are hard to state. Next we consider a situation with autocorrelated data.

Here the hypothesis of randomness is not fulfilled, but no monotone trend exists. It is interesting which test procedures are sensitive to autocorrelation in the sense that they reject H0 even though there is no monotone trend.

The innovations ε1,j, . . . are used to generate the autocorrelated series. The resulting detection rates of the record tests can be seen in Fig. Positive autocorrelations cause both patterns to occur, so that the effects cancel out.

For the rank tests we obtain the following findings: with a suitable splitting factor k, N2 becomes robust against moderate autocorrelations, while the other tests do not.

If we compare the record tests with the rank tests, we find that T3 reacts less sensitively to autocorrelation than the rank tests in most situations.

The two series analysed here consist of the monthly observations of the mean air temperature and the total rainfall in Potsdam between January and April. There are no missing values.

The secular station in Potsdam is the only meteorological station in Germany for which daily data have been collected for a period of over years without missing values.

The measurements are homogeneous. Before applying the methods from Sect., we detrend the time series by subtracting a linear trend.

We also deseasonalize the time series by estimating and subtracting a seasonal effect for each month. The original and the detrended deseasonalized time series can be found in Fig.
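A minimal sketch of this pre-processing (OLS detrending plus subtraction of monthly means; the paper's exact estimation of the seasonal effect may differ):

```python
import numpy as np

def detrend_linear(y):
    """Subtract an OLS-fitted linear trend from the series."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (intercept + slope * t)

def deseasonalize_monthly(y):
    """Subtract the mean of each calendar month
    (monthly series assumed to start in January)."""
    y = np.asarray(y, dtype=float)
    out = y.copy()
    for m in range(12):
        idx = np.arange(m, len(y), 12)
        out[idx] -= y[idx].mean()
    return out
```

Applying both steps leaves a series whose linear trend and average seasonal cycle have been removed, which is the input the trend tests are then applied to.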

The autocorrelation functions of the detrended and deseasonalized time series show positive autocorrelations at small time lags in case of the temperature and no correlation in case of the rainfall see Fig.

In the former case, a first-order autoregressive model with a moderately large AR(1) coefficient gives a possible description of the correlations.

We use the test statistics from Sect. We consider all test statistics except Lo and Ur, as these tests are only useful to detect a downward trend.

The resulting p-values can be seen in Table 1 for the total rainfall time series and in Table 2 for the mean temperature. Among the rank tests, only N2 finds a monotone trend at this level.

All tests except N1 detect a monotone trend in the temperature time series for all k. The autocorrelation in this series is large, which is why we deseasonalize the temperature time series and fit an AR(1) model to the deseasonalized series by maximum likelihood.

If the data generating mechanism is an AR(1) process with uncorrelated innovations, then the residuals of the fitted AR(1) model are asymptotically uncorrelated.

The residuals are even asymptotically independent if the innovations are i.i.d. The residuals are asymptotically normal if the innovations are normally distributed (see Section 5).
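As an illustration, an AR(1) coefficient can be estimated and the residuals checked for remaining lag-1 autocorrelation. This sketch uses a simple least-squares fit rather than the maximum likelihood fit of the paper:

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of the AR(1) coefficient for a centered series,
    together with the residuals."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return phi, x[1:] - phi * x[:-1]

def acf1(x):
    """Sample autocorrelation at lag 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))

# Simulate an AR(1) series with coefficient 0.6 and i.i.d. normal innovations.
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
x = np.empty(5000)
x[0] = e[0]
for t in range(1, 5000):
    x[t] = 0.6 * x[t - 1] + e[t]

phi_hat, resid = fit_ar1(x)   # phi_hat close to 0.6; resid nearly uncorrelated
```

The residual series shows essentially no lag-1 autocorrelation, which is what makes it a suitable input for tests that assume uncorrelated observations.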

Looking at the plot of the scaled residual time series in Fig. However, the residuals do not seem to be identically normally distributed, as we can find some outliers in the residual plot.

Table 3 shows the p-values of the record and rank tests for the residuals. We have not found large differences between the powers of the different tests.

All tests based on records or ranks react sensitively to autocorrelations. Our results confirm findings by Diersen and Trenkler that T3 can be recommended among the record tests because of its good power and its simplicity.

The power of all rank tests except N1 gets smaller if a larger splitting factor is used. For N1 a larger splitting factor enlarges the power, but N1 is not recommended, since even with a large splitting factor it is less powerful than the other tests.

Among the rank tests, N2 seems robust against autocorrelations below 0. Another possibility to reduce the sensitivity to autocorrelation is to fit a low-order AR model and consider the AR residuals.

We have found a significant trend in the time series of the monthly mean temperature in Potsdam, both when using the original data and when using the AR(1) residuals.

Since we find some outliers in the plot of the scaled residuals for this series, another interesting question for further research is the robustness of the various tests against atypical observations.

References
[1] Aiyar, R. Springer, New York
[3] Cox, D. Biometrika 42,
[4] Daniels, H. B 12,
[5] Diersen, J. Statistics 28,
[6] Diersen, J. Siegfried Schach, pp. Eul, Lohmar
[7] Foster, F. B 16,
[8] Hirsch, R. Water Resour. Biometrika 30,
[11] Kendall, M. Arnold, London
[12] Mann, H. Econometrica 13,
[13] Moore, G. R package version 1.

Abstract: Penalty saving abilities are of major importance for a goalkeeper in modern football.

However, statistical investigations of the performance of individual goalkeepers in penalties, leading to a ranking or a clustering of the keepers, are rare in the scientific literature.

In this paper we will perform such an analysis based on all penalties in the German Bundesliga from to . A challenge when analyzing such a data set is the fact that the counts of penalties for the different goalkeepers are highly imbalanced, leading to the question of how to compare goalkeepers who were involved in a disparate number of penalties.

We will approach this issue by using Bayesian hierarchical random effects models. These models shrink the individual goalkeepers' estimates towards an overall estimate, with the degree of shrinkage depending on the amount of information that is available for each goalkeeper.

The underlying random effects distribution will be modelled nonparametrically based on the Dirichlet process. Proceeding this way relaxes the assumptions underlying parametric random effects models and additionally allows us to find clusters among the goalkeepers.

The world cup finals in , , and , for example, were all decided by penalties. Nevertheless, scientific investigations of penalty conversions or savings are rare.

Shooting techniques and tactics, ball speed, anticipation of the keeper, stress management of the shooter, or empirical investigation of penalty myths have been the objects of investigation [8, 12, 16, 15, 13, 21, 9].

However, we are not aware of studies which try to find rankings or clusters of successful penalty scorers or savers.

This is astonishing as the perception of especially skilled goalkeepers seems to be commonplace. It is interesting from a statistical viewpoint that this ranking contains only the absolute number of saved penalties, not accounting for the number of potentially savable penalties for the respective goalkeeper.

In this paper we approach the problem of ranking and clustering goalkeepers for their penalty-saving capabilities in a statistically more valid way.

Our data set includes all 3, penalties from August to May from the German Bundesliga. Data were collected from three different sources.

All penalties from August to May were taken from [7]. The remaining penalties were found by a systematic internet search; their completeness was checked via the aggregated data published by the kicker (the leading German football magazine) in its annual review of the Bundesliga season.

As we are focusing on the goalkeepers' ability to save penalties, we removed all penalties that missed the goal or hit the goal-post or crossbar.

This resulted in deletions, with 3, penalties remaining for the final analysis. Out of these, penalties were saved by the goalkeeper, corresponding to a rate of . The following additional information was available for each penalty: goalkeeper, goalkeeper's team, scorer, scorer's team, experience of goalkeeper and scorer (in terms of penalties), home advantage, day and year of season, and, of course, successful conversion or saving of the penalty.

In total, goalkeepers were involved in the 3, penalties, many of them having faced only a small number of penalties (94 were involved in three or fewer penalties; see also Fig.).

Figure 1(ii) shows the relative frequencies of saved penalties for all goalkeepers. The modes of the density at 0 and 1 are due to the goalkeepers who were involved in very few penalties and saved none or all of them.

Consequently, the relative frequency of saved penalties is a bad estimator of the true ability of the goalkeeper, motivating the use of more sophisticated statistical procedures.

That is, we are faced with two main statistical challenges: (i) how to derive a sound statistical model which will produce more reasonable estimates of the goalkeeper effects than simple relative frequencies, and (ii) how to find clusters of goalkeepers with similar saving abilities.

In Section 2 we will introduce the statistical methods which will allow us to approach (i) and (ii), while Section 3 is devoted to the analysis of the data.

Final conclusions will be drawn in Section 4. The material in this section is mainly based on [4], who provides a recent review of nonparametric modeling of random effects distributions in Bayesian hierarchical models, and [14], who also illustrate how to implement a related model in BUGS.

In its most simple form it can be described as follows: Suppose we observe a normally distributed random variable Yi once for each of n subjects.

In a classical model, the maximum likelihood estimate for each of the subject effects θi would equal yi. Typically, this will lead to highly variable estimates of the subjects' effects, as there are as many parameters as observations.

However, if it is known or at least reasonable to assume that the subjects belong to the same population, a different approach is more appropriate.

In this case one would model the subject effects θi as realizations from an unknown population distribution P. Consequently, in this model all realizations yi are used in estimating the random effects distribution P, which in turn leads to estimates of the individual effects θi.

However, these would be shrunken towards each other. That is, hierarchical models allow for sharing information across subjects, rather than treating subjects as completely unrelated.
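The shrinkage idea can be illustrated with the simplest normal-normal hierarchy with known variances. This is only a toy sketch, not the model of the paper, and all numbers below are made up:

```python
import numpy as np

def shrinkage_estimates(ybar, n, sigma2, mu0, tau2):
    """Posterior means in a normal-normal hierarchy:
    ybar_i ~ N(theta_i, sigma2 / n_i) and theta_i ~ N(mu0, tau2).
    Each raw mean is pulled towards mu0; the less data, the stronger the pull."""
    ybar = np.asarray(ybar, dtype=float)
    n = np.asarray(n, dtype=float)
    w = tau2 / (tau2 + sigma2 / n)   # weight on the observed mean
    return w * ybar + (1.0 - w) * mu0

# A subject observed once with a "perfect" record is shrunk almost to the
# population mean; one observed 100 times keeps most of its own estimate.
est = shrinkage_estimates(ybar=[1.0, 0.2], n=[1, 100],
                          sigma2=0.25, mu0=0.18, tau2=0.01)
```

This is exactly the "borrowing strength" effect: the estimate for a sparsely observed subject is dominated by the population, while well-observed subjects keep estimates close to their own data.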

A Bayesian analysis with an implementation via Gibbs and general Markov chain Monte Carlo (MCMC) sampling is particularly suited for the analysis of more complex hierarchical models, while the standard frequentist approaches become infeasible.

Such a Bayesian approach is taken in this article. Figure 1(ii) suggests that this might be the case, even when ignoring the modes at 0 and 1.

For this reason we base the analysis in this article on Bayesian nonparametric methodologies, as they allow us to model a multimodal random effects distribution.

Specifically, we will model the random effects distribution P as a location mixture of normal distributions and assume a nonparametric prior for the mixing distribution.

The motivation for using mixtures of normal distributions stems from the fact that any distribution on the real line can be approximated arbitrarily well by a mixture of normals [2].

We hence model the density of the random effects distribution P as ∫ N(x | θ, σ²) dQ(θ), where N(· | θ, σ²) denotes the normal density with mean θ. The main issue in this kind of Bayesian analysis is which prior to assume for the unknown discrete mixing distribution Q.

A flexible and convenient solution is to use the Dirichlet process, dating back to [5]. The Dirichlet process is a random discrete probability measure, i.e. a random probability measure whose realizations are discrete distributions.

It is characterized by two parameters: a base probability measure F0 and a positive real number α. A random probability measure Q follows a Dirichlet process prior if, for every measurable partition S1, . . . , Sk of the sample space, the vector (Q(S1), . . . , Q(Sk)) follows a Dirichlet distribution with parameter (αF0(S1), . . . , αF0(Sk)).

Hence F0 is the underlying prior mean distribution, while α acts as a precision parameter. The main reason for the popularity of the Dirichlet process for Bayesian nonparametric applications is the fact that it has an important conjugacy property.

Another reason for the popularity of Dirichlet process priors is the constructive stick-breaking representation of the Dirichlet process given by [17].

πh = Vh ∏l<h (1 − Vl) with Vh ~ Beta(1, α). The terminology stick-breaking is used because, starting with a probability stick of length one, V1 is the proportion of the stick broken off and allocated to θ1, V2 is the proportion of the remaining stick length 1 − V1 allocated to θ2, and so on (see also [6] for details on the general class of stick-breaking priors).

From this stick-breaking representation it becomes obvious that the precision parameter α also determines the clustering properties of the Dirichlet process.

For small α, most probability mass will be placed on the first few realizations of F0, leading to a clustering of observations.

On the other hand, for large α there will be many clusters and a specific realization of Q will be more similar to F0. For a review of Bayesian clustering procedures, including those based on the Dirichlet process, see, for example, [10].
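The truncated stick-breaking construction described above can be sketched as follows (setting the last V to one is the common finite-truncation device; variable names are illustrative):

```python
import numpy as np

def stick_breaking_weights(alpha, N, rng):
    """Truncated stick-breaking: V_h ~ Beta(1, alpha) and
    pi_h = V_h * prod_{l<h} (1 - V_l); the last V is set to one so the
    truncated weights sum exactly to one."""
    V = rng.beta(1.0, alpha, size=N)
    V[-1] = 1.0
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    return V * remaining

rng = np.random.default_rng(0)
w_small = stick_breaking_weights(alpha=0.5, N=50, rng=rng)   # few dominant weights
w_large = stick_breaking_weights(alpha=20.0, N=50, rng=rng)  # mass spread out
```

Drawing weights for a small and a large α makes the clustering behaviour visible: small α concentrates nearly all mass on a handful of atoms, large α spreads it over many.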

Both formulas play an important role for the prior elicitation of the parameter α. The stick-breaking representation of the Dirichlet process is also useful because it directly leads to good finite dimensional approximations for the Dirichlet process by truncation of the sum in (1).

N is a truncation parameter, which is chosen large enough to obtain a good approximation. For small values of α a relatively small N is sufficient to approximate the underlying Dirichlet process well.

We refer to [14] for a detailed discussion of this aspect. In the following we will abbreviate the truncated prior distribution induced for the weights as SethN.

This probability πij is hence modeled as a function of the ith goalkeeper and some additional covariates xij.

The θi are modeled as iid realizations of a random effects distribution P, which in turn is modeled as a location mixture of normal distributions.

Using (3) it can be seen that this leads to a prior mean of . Calculation of (2) shows (see also Fig.) that the expected number of components is relatively small, so it is sufficient to select the truncation parameter N equal to . As the base measure F0 of the Dirichlet process we will use a normal distribution with mean 0 and variance 3.

F0 is chosen such that it is approximately equal to a uniform distribution on the probability scale. For the precision of the normal densities in the mixture we will use an exponential prior distribution with mean . The prior distributions for β, the coefficients of the covariates, are chosen as vague uniform distributions.

A concise summary of the model and its different hierarchies is given in Table 1. To assess the merit of a nonparametric model of the random effects distribution via the proposed Dirichlet process model, we compare it to two less flexible models via the deviance information criterion (DIC) [18].

Defining π as the vector containing the probabilities πij, the deviance is in our case given by D(π) = −2 log L(π | y), i.e. minus twice the Bernoulli log-likelihood. For more details on the DIC we refer to [18].
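Given posterior draws of the probability vector, the DIC can be computed as DIC = Dbar + pD with pD = Dbar − D(posterior mean). This sketch uses one common variant of the effective-parameter computation (plugging in the posterior mean of the probabilities), which need not be exactly the parameterization used in the paper:

```python
import numpy as np

def bernoulli_deviance(y, p):
    """D = -2 * Bernoulli log-likelihood of outcomes y under probabilities p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -2.0 * float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def dic(y, p_draws):
    """p_draws: array of shape (n_draws, n_obs) with posterior draws of the
    success probabilities.  DIC = Dbar + pD, pD = Dbar - D(posterior mean)."""
    devs = np.array([bernoulli_deviance(y, p) for p in p_draws])
    d_bar = devs.mean()
    d_hat = bernoulli_deviance(y, p_draws.mean(axis=0))
    p_d = d_bar - d_hat
    return d_bar + p_d, d_bar, p_d
```

When the posterior draws show no variability, pD is zero and the DIC reduces to the plain deviance, which matches the interpretation of pD as the effective number of parameters.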

Hence, by comparing this model with the Dirichlet process model in terms of the DIC we will be able to quantify the improvement of modeling individual goalkeeper effects.

The second model we use for a comparison is a parametric normal random effects model, which can be obtained by setting θi ~ iid N(μ0, σ0²) in level III of Table 1, and using suitable vague hyper-priors for μ0 and σ0² (here we use μ0 ~ N(0, 3)).

By comparing the Dirichlet process model with this parametric model we will be able to quantify the improvement of a nonparametric modeling of the random effects distribution.

Subsequently the two restricted models will be referred to as Intercept and Normal, our proposed model will be termed the Dirichlet model.

The logarithm of the number of taken penalties provides a good fit in a univariate logistic regression and is chosen to represent the penalty taker's effect.

For better interpretability the logarithm of base 2 is chosen. As home field advantage has an effect in many sports, the home field advantage of the goalkeeper is included as a binary covariate.

To see whether there is a general time trend in the probability of saving a penalty, year is included as a covariate. Year here refers to a football season, which starts at the end of summer.

A year effect could be due to improved. In addition the day of the season is included as a covariate to account for possible time trends within a season.

For model fitting all covariates are scaled to lie between 0 and 1. Further analysis is done in R. For each model the MCMC sampler is run with two independent chains, with a burn-in of 50, iterations followed by , iterations, of which every 20th is kept.

Trace plots of parameters did not indicate problems with convergence of the chains and the results of the independent chains are similar.

The results presented are based on the pooled draws of the independent chains, leading to a total number of 10, draws for each model.

Table 2 shows the DIC and its components for the three models considered. Both the Normal and the Dirichlet model improve on the model with only an intercept, indicating some gain with the inclusion of a random effects distribution.

The improvement is not very large, indicating that the probability of saving a penalty does not vary too much between goalkeepers.

As it is more flexible, the Dirichlet model has a lower average deviance than the Normal model but also a larger number of effective parameters leading to a DIC that is only slightly lower.

To answer the question whether there are distinct clusters of goalkeepers with differing abilities, we compare the posterior distribution of the number of distinct components p(k | y, α, n) to the prior computed via (2).

Barplots of both distributions are shown in Fig. One can see that the posterior puts less mass on a higher number of components than the prior, with one single component having the highest posterior probability.

The posterior mean is 1. Observing the posterior expectation of the random effects distributions.

Thus there is not much support for different clusters in the data. In the Dirichlet model even for parameter draws with several distinct components, the resulting distribution tended to be unimodal a mixture of normal distribution does not have to be multimodal.

However, the more flexible Dirichlet model leads to a distribution with heavier tails than the one resulting from the Normal model. Next we take a look at the estimates of the goalkeepers' probabilities of saving a penalty that can be derived from the models.

The binary variable home field advantage is set to 0, representing no home field advantage for the goalkeeper.

Figure 3 shows the posterior mean saving probabilities from (4) for all goalkeepers, smoothed by a kernel density estimate.

Comparing Fig., the range of estimates is only about 0. Figure 3(ii) shows a close-up of the distribution in (i), and as for the random effects distribution it can be seen that the estimates of the Normal and Dirichlet models differ mainly in the tails, with the Dirichlet model leading to more pronounced tails.

Regarding the question of identifying the best and worst keepers, the tails of the distribution are of importance.

As the Dirichlet model is more flexible in the tails, it is used to determine a ranking of the keepers. In performing the ranking (see Table 3), goalkeepers are ordered by their average rank over the MCMC draws.

This explains the fact that in some cases a goalkeeper with a higher rank nevertheless has a higher posterior expected probability of saving a penalty.

Several interesting observations arise from the ranking in Table 3. The goalkeepers' estimated saving probabilities are not really different, with the best keeper having . Moreover, the credible intervals for the saving probabilities are seen to be rather large; the credible intervals for the best and the worst keeper overlap considerably.

As such, saving capabilities are rather similar across goalkeepers, reflecting the fact that no explicit clusters of goalkeepers could be found in our analysis.

It is nevertheless surprising that the two German goalkeepers who are thought to be penalty specialists, Oliver Kahn and Jens Lehmann, rank relatively low, indicating that both of them perform rather badly in penalty saving.

This is probably due to the perception of the German expertise in penalty shoot-outs in recent tournaments, with Kahn and Lehmann playing prominent roles on these occasions.

The degree of shrinking from the Dirichlet model is quite impressive. To demonstrate this, we consider Michael Melka and Gerhard Teupel as two representatives of the goalkeepers who were faced with only one single penalty during their career in the German Bundesliga.

Another peculiarity might be the fact that three goalkeepers of Bayern München (Manfred Müller, Walter Junghans, and Sepp Maier), who altogether played more than 17 seasons for the team, are among the worst 5 penalty savers.

This is in strict contrast to the fact that Bayern München is the most successful team in the German Bundesliga.

[Table 3 excerpt: Melka, Michael; Teupel, Gerhard; Kahn, Oliver; Lehmann, Jens.] For the penalty taker the odds ratio is given for a scorer with twice the number of penalties.

The odds ratio for year compares the last to the first year, which is also the case for day of the season. [Table 4 columns: Covariate — Scorer, Home Field Advantage, Year, Day of Season.]

Finally, we consider the effects of the covariates. Since a logistic regression model is fitted, exp(βk) can be interpreted as the multiplicative change in the odds of the event if the kth covariate is raised by 1.
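A small numeric illustration of this interpretation, using hypothetical coefficient values (beta0 and beta_scorer below are made-up numbers, not estimates from the paper):

```python
import math

def saving_prob(eta):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-eta))

def odds(p):
    return p / (1.0 - p)

# Hypothetical coefficients (illustration only):
beta0, beta_scorer = -1.5, -0.2   # scorer effect enters on the log2 penalty count

p1 = saving_prob(beta0 + beta_scorer * math.log2(8))    # scorer with 8 penalties
p2 = saving_prob(beta0 + beta_scorer * math.log2(16))   # twice as many penalties
ratio = odds(p2) / odds(p1)   # equals exp(beta_scorer): odds ratio for doubling
```

Because the scorer covariate is on the log2 scale, doubling the scorer's penalty count changes the covariate by exactly 1, so the saving odds are multiplied by exp(beta_scorer) regardless of the starting count.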

Table 4 shows the estimated odds ratios for the Dirichlet model. As the credible interval for the odds ratio of the scorer effect does not contain 1, there is strong evidence that a scorer who has taken more penalties reduces the goalkeeper's probability of saving the penalty.

This is a reasonable result, since players that are known to be good penalty takers are probably chosen more often to take a penalty kick.

As the scorer effect is given on the log2 scale, we can interpret the odds ratio as follows: Faced with a scorer that scored twice as.

For all the other covariates, 1 is clearly inside the credible interval. This implies that there is no evidence for a home field advantage for the goalkeeper.

Additionally, no evidence is found for either an overall time trend or a time trend within seasons. These conclusions are also obtained for the other two models.

As is typical for such a data set, many goalkeepers were involved in only a few penalties. This poses the question of how to derive reasonable estimates for those keepers and how to compare keepers with highly disparate numbers of penalties.

We approached this issue by using Bayesian hierarchical models. This naturally allows for borrowing strength and hence shrinkage between the goalkeepers' individual effect estimates.

A major impetus for studying the data was to investigate whether there are certain groups of goalkeepers, such as penalty specialists and penalty losers.

This motivated the use of Bayesian nonparametric approaches to model the random effects, as these techniques allow for modelling multimodal random effects distributions.

In the analyses we conducted in Section 3 we did not find any hint of multimodality. We also produced a ranking of the goalkeepers based on the average rank encountered during the MCMC runs.

One observation is central: there is no strong evidence in the data that the different goalkeepers are highly different; for example, the credibility intervals for the goalkeeper ranking first (Rudolf Kargus) and last (Sepp Maier) overlap considerably.

From an application viewpoint it is somewhat surprising to see well-known goalkeepers like Sepp Maier ranking so low. This is a direct consequence of the shrinkage effect of the random effects model: As can be seen in Table 3, only goalkeepers who were involved in many penalties can rank at the top or the bottom of the list, while the goalkeepers with fewer penalties are all in the middle of the ranking.

This is reasonable from a statistical point of view, as we can only make statistically accurate estimates for keepers with many penalties, while those with few penalties are shrunken towards the overall mean.

This shrinkage effect should be kept in mind, when interpreting the ranking of goalkeepers from an application viewpoint.

As can be seen in the tails of the random effects distribution and the estimated individual effects (Figs.), the Dirichlet model is more flexible than the Normal model. There are, however, opportunities to model the random effects distribution even more flexibly.

The Dirichlet process may be replaced by another stochastic process. In our analysis, only the covariate we used as a substitute for the scorer effect seems to have an important effect.

This motivates a further study, where the penalty scorer effect is also modeled by a random effects distribution instead of a simple fixed covariate.

This might lead to a more realistic model and would allow for a ranking of the scorers as well. For the Dirichlet model a complication arises, however, if a second random effect is to be included.

Then it is necessary to center the random effects distributions to have mean zero. Simply setting the mean of the base probability measure F0 to zero is not sufficient to achieve zero mean of the random effects distribution, and more sophisticated procedures need to be applied such as the centered Dirichlet process [3], which we plan to do in future research.

References
[1] Antoniak, C. In: Bernardo, J. Bayesian Statistics 2, pp. Elsevier, Amsterdam
[3] Dunson, D. Agon Statistics 36, 2nd edn. Agon-Sportverlag, Kassel
[8] Kuhn, W. In: Reilly, K. Science and football, pp. Bayesian Anal. In: Bäumler, G. Sportwissenschaft rund um den Fußball, pp. Ergonomics 48,
[16] Savelsbergh, G. Sport Sci. B 64,
[19] Sturtz, S.

Abstract: Deterministic computer experiments are of increasing importance in many scientific and engineering fields.

In this paper we focus on assessing the adequacy of computer experiments, i.e. whether the computer experiment correctly predicts the corresponding real-world experiment. A permutation test is presented which can be adapted to different situations in order to achieve good power.

A broad variety of methods for analyzing, designing and predicting data from computer experiments has been proposed in the literature, see for example [1] for an overview.

The advantages of computer experiments are obvious, they are often much faster, cheaper and generally easier to work with compared to the respective real world experiments on certain phenomena.

However, validating a computer experiment remains an important problem. Here, permutation tests are a valuable tool as they are distribution free.

Additionally, they can achieve the same asymptotic power as some corresponding uniformly best unbiased test, see [5] Chap.

Hence, we suggest a permutation test for the null hypothesis that a computer experiment is a correct predictor for a corresponding real world experiment.

Our article is organized as follows: In Sect. A summary concludes our paper. Throughout this paper we use the notation of [5].

Let T(Y) be a real-valued statistic for testing a hypothesis H0, and let Y be a real-valued random vector with observation y ∈ Y, where Y is the sample space of Y.

Let G be a finite group of transformations mapping Y onto Y, with cardinality M. This test is a level-α test due to the following theorem, which is an extension of a theorem in [5]. Theorem 1.

Let Y have distribution P ∈ P. If for every P ∈ P0 the test statistic T is invariant under the transformations g ∈ G, i.e. T(g(Y)) = T(Y) for all g ∈ G, then the test is of level α. By construction we have.

Lehmann and Romano [5] require the assumption that the distribution of Y is invariant under the null hypothesis.

However, the test is also of level α if just the test statistic T is invariant under the transformations g ∈ G; this is easy to check.
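As a minimal illustration of this construction (our own sketch, not code from the paper), the following computes a permutation p-value by evaluating a test statistic over a finite group of transformations. The group used here is the sign-flip group, under which null distributions that are symmetric about zero are invariant; the function names are ours.

```python
import itertools
import numpy as np

def permutation_p_value(t_stat, y, group):
    """Permutation p-value: the fraction of transformed samples g(y),
    g in the finite group G, whose statistic is at least as extreme
    as the observed value T(y)."""
    t_obs = t_stat(y)
    t_all = [t_stat(g_y) for g_y in group(y)]
    return sum(t >= t_obs for t in t_all) / len(t_all)

def sign_flip_group(y):
    """All 2^n sign flips of the coordinates, a valid group when each
    coordinate's null distribution is symmetric about 0."""
    n = len(y)
    return [y * np.array(signs) for signs in itertools.product((1, -1), repeat=n)]

rng = np.random.default_rng(1)
d = rng.normal(loc=1.0, size=8)        # paired differences with a true shift
p = permutation_p_value(lambda v: v.mean(), d, sign_flip_group)
```

Rejecting whenever the p-value is at most α gives a level-α test, because the identity transform is in the group and, under H0, all M transformed samples have the same distribution.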

Here we interpret a computer experiment as an unknown function f depending on x ∈ R^d. Now assume that Y1, …, Yn are the observations of the corresponding real-world experiment. Following Good [2], although the original differences are not necessarily identically distributed, the auxiliary variables Z(xi) are identically distributed under the null hypothesis.

Hence, any test statistic based on these variables yields a level-α test. Good's test statistic is binomially distributed, B(n, 0.5), under the null hypothesis.

However, this test is inconsistent against certain alternatives: if, over the data points x1, …, xn, positive and negative deviations roughly balance, the test statistic will presumably attain only medium-sized values although the alternative is true.

To avoid this inconsistency, we suggest applying Good's test locally. This yields a level-α test due to the theorem stated above.

Now, an important question is how to define the subsets si. In the following we discuss two possibilities. Firstly, the points can be grouped according to their k nearest neighbors (knn) with respect to the distance in the input space.

The k nearest neighbors of a point xi are defined to be the k points closest to xi. If the null hypothesis is rejected, the knn subsets with the highest values of Dki provide information on the local fit of the computer experiment.

Note that the k nearest neighbors are calculated from the standardized input variables; otherwise, the neighborhoods are dominated by input variables with large ranges.
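The grouping step can be sketched as follows (our sketch; the function name `knn_indices` is ours): standardize each input column, then take the k nearest neighbours under Euclidean distance.

```python
import numpy as np

def knn_indices(X, k):
    """Return, for every row of X, the indices of its k nearest
    neighbours, computed on column-standardized inputs so that no
    variable dominates the Euclidean distance through its range."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # pairwise squared Euclidean distances between standardized rows
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)       # a point is not its own neighbour
    return np.argsort(d2, axis=1)[:, :k]

X = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
nn = knn_indices(X, 1)                 # nearest neighbour of each point
```

The subset si then consists of xi together with its k neighbours; thanks to the standardization, the grouping is invariant to rescaling individual inputs.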

We refer to this test as the x-knn test. Often computer experiments come with a high-dimensional input space combined with the restriction that only a limited number of runs is possible.

Then it is still possible to define k nearest neighbors but, due to the curse of dimensionality, this is not necessarily a good choice; see [3], Chap.

Therefore, we will group the observations according to their y values. If the unknown function shows some kind of monotonic behavior, similar predictor values will likely have some common characteristics.

Hence, grouping them might result in a high power for the permutation test. This version of the test is called the y-knn test. Again, if the null hypothesis is rejected, those Dki with high values suggest that the fit of the computer experiment for y values near yi is poor.
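The local procedure can be sketched as a Monte-Carlo permutation test (our sketch: here Dki is taken as the deviation of the positive-difference count in subset si from its null expectation, the combined statistic is the maximum over subsets, and random sign flips of the differences approximate the reference distribution; these specific choices are assumptions, not the paper's exact definitions).

```python
import numpy as np

def local_good_statistic(diff, subsets):
    """Maximum over all subsets s_i of D_ki = |#{positive differences} - |s_i|/2|,
    the deviation from the count expected under the null hypothesis."""
    return max(abs(float((diff[s] > 0).sum()) - len(s) / 2) for s in subsets)

def local_good_test(diff, subsets, n_perm=2000, alpha=0.05, seed=2):
    """Monte-Carlo permutation version of the local test: random sign
    flips of the differences generate the null reference distribution."""
    rng = np.random.default_rng(seed)
    t_obs = local_good_statistic(diff, subsets)
    t_null = np.empty(n_perm)
    for b in range(n_perm):
        signs = rng.choice((-1.0, 1.0), size=diff.size)
        t_null[b] = local_good_statistic(diff * signs, subsets)
    p = (1 + int((t_null >= t_obs).sum())) / (1 + n_perm)
    return p, p <= alpha

# Differences that cancel globally but not locally: a global sign test sees
# n/2 positives, while the local statistic is maximal on each subset.
diff = np.concatenate([np.ones(10), -np.ones(10)])
subsets = [list(range(0, 10)), list(range(10, 20))]
p, reject = local_good_test(diff, subsets)
```

In this toy example the global binomial test is blind (10 positive and 10 negative differences) while the local test rejects, which is exactly the inconsistency the subset construction is meant to avoid.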

Generally, the subsets si can be defined in many different ways. The power of the test against certain kinds of alternatives can be controlled by the way subsets are chosen.

If there is some a priori knowledge about the function g(x), it can be incorporated when defining appropriate subsets.

In particular, we are interested in experiments with a simplex-based Latin hypercube design [6] and experiments with random Latin hypercubes [7].

For both settings, all three functions f1, f2, f3 are applied to simulate data, resulting in six different combinations; see Table 1.

The simulation results are summarized in Table 1. The percentage of rejections under the null hypothesis is close to the corresponding level of significance for all considered tests.

For both alternatives, H1 and H2, all three tests deliver comparable results for random Latin hypercubes and for simplex-based Latin hypercubes.

Thus, the power of the tests does not seem to depend on the design. For alternative H1, the x-knn test possesses the highest power in the simulation, while Good's test performs slightly better than the y-knn test.

Again, for alternative H2, the x-knn test shows the highest power, but here the y-knn test delivers better results than Good's test.

Hence, alternative H2 is an alternative for which Good's test is almost inconsistent. The knn tests provide a considerable improvement over the test described in [2].

Depending on the context, different ways of forming subsets can be used in order to incorporate prior knowledge about the behavior of the simulation and the real experiments.

For the simulated example, the x-knn version has proven to be very efficient. Acknowledgement: Financial support of the DFG research training group Statistical Modelling is gratefully acknowledged.

References: [1] Fang, K. Springer, New York. [3] Hastie, T. Springer, New York. [4] Lehman, J. Statistica Sinica 14. [5] Lehmann, E. Springer, New York. [6] Mühlenstädt, T.

Abstract. Several exact confidence intervals for the common mean of independent normal populations have been proposed in the literature.

Not all of these intervals always produce genuine intervals. In this paper, we consider three types of always genuine exact confidence intervals and compare these intervals with two known generalized confidence intervals for the common mean and a newly proposed one.

Besides simulation results, two real data examples are presented illustrating the performance of the various procedures. Graybill and Deal [4] pioneered the research on common mean estimation, and since then much further research has been done on this problem, especially from a decision-theoretic point of view; see Chap.

The focus of this paper is on confidence intervals for the common mean. Large sample confidence intervals can be easily constructed around the Graybill-Deal estimator with estimated standard errors proposed by Meier [13] or Sinha [14].
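For orientation, the Graybill-Deal estimator weights each sample mean by n_i / s_i^2. A minimal sketch of the estimator with a naive large-sample interval follows (our own code; the simple plug-in standard error shown here ignores the extra variability from estimating the weights, which is exactly what the corrections of Meier [13] and Sinha [14] address):

```python
import numpy as np
from statistics import NormalDist

def graybill_deal(means, variances, sizes, alpha=0.05):
    """Graybill-Deal common-mean estimate with a naive large-sample
    confidence interval based on the plug-in weights w_i = n_i / s_i^2."""
    w = np.asarray(sizes) / np.asarray(variances)
    mu_hat = float(np.sum(w * np.asarray(means)) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))          # plug-in standard error
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return mu_hat, (mu_hat - z * se, mu_hat + z * se)

# two samples with equal sizes and variances -> equal weights
mu, (lo, hi) = graybill_deal([1.0, 3.0], [1.0, 1.0], [10, 10])
```

With equal weights the estimate is simply the average of the two sample means; unequal variances shift it toward the more precise sample.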

Fairweather [3] was the first to propose an exact confidence interval for the common mean, which is based on a linear combination of t-test statistics.

Also using t-test statistics, Cohen and Sackrowitz [1] developed two further exact confidence intervals on the common mean. Jordan and Krishnamoorthy [10] suggested using a linear combination of F-test statistics for constructing an exact confidence interval.

Yu et al. proposed a further exact interval. Recently, Hartung and Knapp [6] used P-values of t-test statistics and introduced two broad classes of exact confidence intervals for the common mean using weighted inverse normal and generalized inverse χ²-methods for combining P-values.
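The weighted inverse normal combination underlying one of these classes can be sketched as follows (our sketch, assuming SciPy for the t-distribution; the choice of weights is left open here, whereas Hartung and Knapp discuss concrete choices). The confidence set for the common mean consists of all values mu0 whose combined statistic satisfies |z(mu0)| ≤ z_{1-α/2}.

```python
import numpy as np
from statistics import NormalDist
from scipy import stats

def inverse_normal_z(mu0, means, sds, sizes, weights):
    """Combine one-sided t-test P-values for H0: mu = mu0 via the
    weighted inverse normal method: z = sum(w_i * Phi^{-1}(1 - p_i))
    / sqrt(sum(w_i^2)), standard normal under H0."""
    nd = NormalDist()
    means, sds, sizes, weights = map(np.asarray, (means, sds, sizes, weights))
    t = np.sqrt(sizes) * (means - mu0) / sds
    p = stats.t.sf(t, df=sizes - 1)               # one-sided P-values
    # clamp away from 0/1 before inverting the normal cdf
    z = np.array([nd.inv_cdf(min(max(q, 1e-12), 1 - 1e-12)) for q in 1 - p])
    return float(np.sum(weights * z) / np.sqrt(np.sum(weights ** 2)))
```

Scanning mu0 (e.g. by bisection on each side of the point estimate) and keeping the values with |z(mu0)| below the normal quantile yields the interval; since each t-test P-value is exact, the resulting interval is exact as well.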

Besides Fairweather's interval, the intervals proposed by Hartung and Knapp always yield genuine intervals. All the other exact confidence intervals do not necessarily provide genuine intervals.

Based on the concept of generalized confidence intervals introduced by Weerahandi [16], Krishnamoorthy and Lu [11] as well as Lin and Lee [12] proposed generalized pivotal quantities that can be used for calculating generalized confidence intervals on the common mean.

In this paper, we will introduce a further generalized pivotal quantity. The outline of this paper is as follows: In Sect.

Section 3 contains the description of the exact confidence intervals, where we restrict the presentation to the three types of intervals mentioned above, which always yield genuine intervals.


