Prof. Dr. Christine Hildegard Müller

Chair of Statistics with Applications in the Field of Engineering Sciences
TU Dortmund University

Publications
  • A-optimal designs for state estimation in networks
    Müller, C.H. and Schorning, K.
    Statistical Papers (2023)
    DOI: 10.1007/s00362-023-01435-y
  • Inference of Intensity-Based Models for Load-Sharing Systems With Damage Accumulation
    Müller, C.H. and Meyer, R.
    IEEE Transactions on Reliability (2022)
    To model damage accumulation for load-sharing systems, two models given by intensity functions of self-exciting point processes are proposed: a model with additive damage accumulation and a model with multiplicative damage accumulation. Both models include the model without damage accumulation as a special case. For both models, the likelihood functions are derived and maximum likelihood estimators and likelihood ratio tests are given in a scale-invariant version and a scale-dependent version. Furthermore, a Bayesian approach using Markov chain Monte Carlo methods for posterior computation is provided. The frequentist and Bayesian methods are applied to a data set of failures of tension wires of concrete beams where a significant damage accumulation effect is confirmed by both additive and multiplicative damage accumulation models. This is all the more remarkable as a simulation study indicates that the tests for an existing damage accumulation effect are rather conservative. Moreover, prediction intervals for the failure times of the tension wires in a new experiment are given, which improve former prediction intervals derived without damage accumulation. The simulation study considers a scenario with a fixed time horizon and one with fixed numbers of failed components of the systems. © 2022 IEEE.
    DOI: 10.1109/TR.2022.3140483
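The intensity structure described above can be sketched numerically. Below is a minimal log-likelihood for a hypothetical additive damage-accumulation model with piecewise-constant intensity; the specific factorization (n_comp - j) * (theta + delta * j) is an assumption made here for illustration, not the fitted model of the paper.

```python
import math

def loglik_additive(theta, delta, times, n_comp):
    """Log-likelihood of ordered failure times of a load-sharing system
    under a piecewise-constant intensity: while j components have failed,
    the system intensity is (n_comp - j) * (theta + delta * j).
    Schematic additive damage-accumulation sketch (hypothetical form);
    delta = 0 gives the model without damage accumulation."""
    ll, t_prev = 0.0, 0.0
    for j, t in enumerate(times):
        rate = (n_comp - j) * (theta + delta * j)
        # log intensity at the jump, minus integrated intensity since the
        # previous failure (piecewise-constant, so the integral is rate * dt)
        ll += math.log(rate) - rate * (t - t_prev)
        t_prev = t
    return ll

# For delta = 0 the maximizer has the classical occurrence/exposure form
# m / sum_j (n_comp - j) * (t_j - t_{j-1}) for m observed failures.
times = [0.8, 2.1, 3.0, 4.6]
exposure = sum((5 - j) * (t - tp) for j, (tp, t) in
               enumerate(zip([0.0] + times, times)))
theta_hat = len(times) / exposure
```

Evaluating `loglik_additive` over a grid of `delta` values gives a crude profile likelihood with which the no-damage-accumulation case `delta = 0` can be compared.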
  • Sign depth tests in multiple regression
    Horn, M. and Müller, C.H.
    Journal of Statistical Computation and Simulation (2022)
    The recently proposed simple but powerful sign depth tests depend on the order of the residuals. While one-dimensional explanatory variables provide a natural order, there exists no canonical order for multidimensional explanatory variables. For this scenario, we present different approaches for ordering multidimensional explanatory variables and compare them regarding their performance with respect to the stability of the ordering, the usability for non-metric explanatory variables, the computational time complexity, and in the context of testing in linear models including high-dimensional multiple regression. It is shown that the sign depth tests based on orderings given by pairwise distances perform best. They are much more powerful than the classical sign test and also than the F-test when the errors are not normally distributed. They are competitive with the much more complicated robust Wald test based on efficient MM-estimation. Additionally, the sign depth tests are more appropriate for outlier robust model checks. © 2022 Informa UK Limited, trading as Taylor & Francis Group.
    DOI: 10.1080/00949655.2022.2130922
  • Simple powerful robust tests based on sign depth
    Leckey, K. and Malcherczyk, D. and Horn, M. and Müller, C.H.
    Statistical Papers (2022)
    Up to now, powerful outlier robust tests for linear models are based on M-estimators and are quite complicated. On the other hand, the simple robust classical sign test usually provides very bad power for certain alternatives. We present a generalization of the sign test which is similarly easy to comprehend but much more powerful. It is based on K-sign depth, denoted K-depth for short. These so-called K-depth tests are motivated by simplicial regression depth, but are not restricted to regression problems. They can be applied as soon as the true model leads to independent residuals with median equal to zero. Moreover, general hypotheses on the unknown parameter vector can be tested. While the 2-depth test, i.e. the K-depth test for K = 2, is equivalent to the classical sign test, K-depth tests with K ≥ 3 turn out to be much more powerful in many applications. A drawback of the K-depth test is its fairly high computational effort when implemented naively. However, we show how this inherent computational complexity can be reduced. In order to see why K-depth tests with K ≥ 3 are more powerful than the classical sign test, we discuss the asymptotic behavior of its test statistic for residual vectors with only few sign changes, which is in particular the case for some alternatives the classical sign test cannot reject. In contrast, we also consider residual vectors with alternating signs, representing models that fit the data very well. Finally, we demonstrate the good power of the K-depth tests for some examples including high-dimensional multiple regression. © 2022, The Author(s).
    DOI: 10.1007/s00362-022-01337-5
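As a reading aid for the definition above, here is a brute-force sketch of the K-sign depth: the fraction of index-ordered K-tuples of residual signs that strictly alternate. This is the naive O(n^K) computation implied by the definition and assumes no residual is exactly zero; it is illustrative code, not the authors' implementation.

```python
from itertools import combinations
from math import comb

def k_sign_depth(residuals, K=3):
    """Naive K-sign depth: the fraction of the C(n, K) index-ordered
    K-tuples of residuals whose signs strictly alternate."""
    signs = [1 if r > 0 else -1 for r in residuals]
    hits = 0
    for idx in combinations(range(len(signs)), K):
        # a tuple counts if every consecutive pair of signs differs
        if all(signs[a] != signs[b] for a, b in zip(idx, idx[1:])):
            hits += 1
    return hits / comb(len(signs), K)

# Frequent sign changes (good fit) give high depth, long runs of one
# sign (systematic misfit) give low depth:
print(k_sign_depth([1, -1, 1, -1, 1, -1]))   # -> 0.4
print(k_sign_depth([2, 1, 3, -1, -2, -1]))   # -> 0.0
```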
  • Detection of fatigue induced fracture of prestressing steel by means of monitoring of experiment and structure [Detektieren ermüdungsbedingter spannstahlbrüche mittels rissmonitoring im versuch und am bauwerk]
    Heinrich, J. and Maurer, R. and Leckey, K. and Müller, C.H. and Ickstadt, K.
    Bauingenieur 96 (2021)
    On a prestressed concrete bridge, systematic crack formations with crack widths up to 0.5 mm were detected in the region of the point of zero moment due to permanent load. These cracks were a consequence of the restraint moments due to temperature (∆TM), which had not been considered at that time. As a result of the cracks, the tendons were endangered by fatigue. Even with more refined methods, it was not possible to verify sufficient fatigue resistance. In addition, the overall structural condition was poor. Therefore, the road authority decided to build a new bridge as a replacement as soon as possible. Up to that point the structural safety of the bridge had to be ensured. For this purpose, continuous crack monitoring was carried out for the critical areas. Fatigue tests on prestressed concrete components at TU Dortmund University formed the basis for its evaluation. These tests succeeded in reliably identifying individual wire breaks during the ongoing test and in developing a prognosis method for the remaining service life. The experiences and difficulties of transferring these methods to a real structure are reported. © 2021, VDI Fachmedien GmbH & Co. All rights reserved.
    DOI: 10.37544/0005-6650-2021-03-60
  • Experimental and statistical analysis of the wear of diamond impregnated tools
    Malevich, N. and Müller, C.H. and Dreier, J. and Kansteiner, M. and Biermann, D. and De Pinho Ferreira, M. and Tillmann, W.
    Wear 468-469 (2021)
    Diamond impregnated tools, which are used to machine concrete, are considered. During their application, the bonding as well as the diamonds need to wear down in a certain way to gain a sharp tool. This required wear is called self-sharpening and means a continuous exposure of new diamonds. Within the development phase of diamond tools, time- and cost-intensive testing is necessary for the assessment of the tool performance. Hence, an extrapolation based on a minimal amount of testing is desirable to forecast the tool lifetime. A further reduction of the development and testing cost can be achieved by reducing the data needed to forecast the tool performance. Within this paper, the development of a statistical model is shown which was used to forecast the lifetime of the single diamonds on the tool. The statistical analysis is based on single segment tests which were carried out with different segment specifications. During the tests, the exposed and broken out diamonds were counted to serve as the necessary input data for the statistical analysis. The counting of the diamonds on the segment was done in two different ways: based on the 2-dimensional microscopic pictures made after every minute of drilling and based on the 3-dimensional surface measurements made after every 5 min of drilling. It turns out that these two approaches to the wear analysis provide similar results. © 2020 Elsevier B.V.
    DOI: 10.1016/j.wear.2020.203574
  • K-sign depth: From asymptotics to efficient implementation
    Malcherczyk, D. and Leckey, K. and Müller, C.H.
    Journal of Statistical Planning and Inference 215 (2021)
    The K-sign depth (K-depth) of a model parameter θ in a data set is the relative number of K-tuples among its residual vector that have alternating signs. The K-depth test based on this depth notion, recently proposed by Leckey et al. (2020), is equivalent to the classical residual-based sign test for K=2, but is much more powerful for K≥3. This test has two major drawbacks. First, the computation of the K-depth is fairly time consuming having a polynomial time complexity of degree K, and second, the test requires knowledge about the quantiles of the test statistic which previously had to be obtained by simulation for each sample size individually. We tackle both of these drawbacks by presenting a limit theorem for the distribution of the test statistic and deriving an (asymptotically equivalent) form of the K-depth which can be computed efficiently. For K=3, such a limit theorem was already derived in Kustosz et al. (2016a) by mimicking the proof for U-statistics. We provide here a much shorter proof based on Donsker's theorem and extend it to any K≥3. As part of the proof, we derive an asymptotically equivalent form of the K-depth which can be computed in linear time. This alternative and the original implementation of the K-depth are compared with respect to their runtimes and absolute difference. © 2021 Elsevier B.V.
    DOI: 10.1016/j.jspi.2021.04.006
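The paper's asymptotically equivalent form is not reproduced here, but for K = 3 the exact count of alternating triples can already be obtained in linear time by an elementary prefix-counting argument over the middle index — a sketch under that observation, not the authors' construction:

```python
from itertools import combinations
from math import comb
import random

def depth3_naive(signs):
    """Cubic-time 3-sign depth straight from the definition."""
    n = len(signs)
    hits = sum(1 for i, j, k in combinations(range(n), 3)
               if signs[i] != signs[j] != signs[k])
    return hits / comb(n, 3)

def depth3_linear(signs):
    """Exact 3-sign depth in O(n): for each middle index j, multiply the
    number of opposite signs before j by the number of opposite signs
    after j, since a triple alternates iff both neighbours of the middle
    sign are opposite to it."""
    n = len(signs)
    total_pos = sum(1 for s in signs if s > 0)
    pos_before = 0
    hits = 0
    for j, s in enumerate(signs):
        neg_before = j - pos_before
        pos_after = total_pos - pos_before - (1 if s > 0 else 0)
        neg_after = (n - j - 1) - pos_after
        hits += neg_before * neg_after if s > 0 else pos_before * pos_after
        if s > 0:
            pos_before += 1
    return hits / comb(n, 3)

# The two implementations agree exactly on random sign vectors:
random.seed(1)
s = [random.choice([-1, 1]) for _ in range(200)]
assert depth3_naive(s) == depth3_linear(s)
```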
  • Detection of circlelike overlapping objects in thermal spray images
    Kirchhoff, D. and Kuhnt, S. and Bloch, L. and Müller, C.H.
    Quality and Reliability Engineering International 36 (2020)
    In this paper, we present a new algorithm for the detection of distorted and overlapping circlelike objects in noisy grayscale images. Its main step is an edge detection using rotated difference kernel estimators. To the resulting estimated edge points, circles are fitted in an iterative manner using a circular clustering algorithm. A new measure of similarity can assess the performance of algorithms for the detection of circlelike objects, even if the number of detected circles does not coincide with the number of true circles. We apply the algorithm to scanning electron microscope images of a high-velocity oxygen fuel (HVOF) spray process, which is a popular coating technique. There, a metal powder is fed into a jet, gets accelerated and heated up by means of a mixture of oxygen and fuel, and finally deposits as coating upon a substrate. If the process is stopped before a continuous layer is formed, the molten metal powder solidifies in the form of small, almost circular so-called splats, which vary with regard to their shape, size, and structure and can overlap each other. As these properties are challenging for existing image processing algorithms, engineers have analyzed splat images manually up to now. We further compare our new algorithm with a baseline approach that uses the Laplacian of Gaussian blob detection. It turns out that our algorithm performs better on a set of test images of round, spattered, and overlapping circles. © 2020 The Authors. Quality and Reliability Engineering International published by John Wiley & Sons Ltd
    DOI: 10.1002/qre.2689
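The circle-fitting step can be illustrated with the standard algebraic least-squares (Kåsa) circle fit — a generic sketch of fitting a circle to edge points, not the paper's rotated-difference-kernel edge detection or its circular clustering algorithm:

```python
import math

def _solve3(M, v):
    """Cramer's rule for a 3x3 linear system."""
    def det3(A):
        return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
                - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
                + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    d = det3(M)
    out = []
    for col in range(3):
        A = [row[:] for row in M]
        for i in range(3):
            A[i][col] = v[i]
        out.append(det3(A) / d)
    return out

def fit_circle(points):
    """Kasa fit: solve x^2 + y^2 + a*x + b*y + c = 0 in the least-squares
    sense via the normal equations; centre is (-a/2, -b/2)."""
    Sxx = Sxy = Syy = Sx = Sy = S1 = Sxz = Syz = Sz = 0.0
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y; S1 += 1.0
        Sxz += x * z; Syz += y * z; Sz += z
    a, b, c = _solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, S1]],
                      [-Sxz, -Syz, -Sz])
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - c)

# Points lying exactly on a circle with centre (2, -1) and radius 3
# are recovered up to floating-point error:
pts = [(2 + 3 * math.cos(t), -1 + 3 * math.sin(t)) for t in (0.3, 1.2, 2.5, 4.0)]
cx, cy, r = fit_circle(pts)
```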
  • Prediction intervals for load-sharing systems in accelerated life testing
    Leckey, K. and Müller, C.H. and Szugat, S. and Maurer, R.
    Quality and Reliability Engineering International 36 (2020)
    Based on accelerated lifetime experiments, we consider the problem of constructing prediction intervals for the time point at which a given number of components of a load-sharing system fails. Our research is motivated by lab experiments with prestressed concrete beams where the tension wires fail successively. Due to an audible noise when breaking, the time points of failure could be determined exactly by acoustic measurements. Under the assumption of equal load sharing between the tension wires, we present a model for the failure times based on a birth process. We provide a model check based on a Q-Q plot including a simulated simultaneous confidence band and four simulation-free prediction methods. Three of the prediction methods are given by confidence sets where two of them are based on classical tests and the third is based on a new outlier-robust test using sign depth. The fourth method uses the implicit function theorem and the δ-method to get prediction intervals without confidence sets for the unknown parameter. We compare these methods by a leave-one-out analysis of the data on prestressed concrete beams. Moreover, a simulation study is performed to discuss advantages and drawbacks of the individual methods. © 2020 John Wiley & Sons Ltd.
    DOI: 10.1002/qre.2664
  • Detection of Anomalous Sequences in Crack Data of a Bridge Monitoring
    Abbas, S. and Fried, R. and Heinrich, J. and Horn, M. and Jakubzik, M. and Kohlenbach, J. and Maurer, R. and Michels, A. and Müller, C.H.
    Studies in Classification, Data Analysis, and Knowledge Organization (2019)
    For estimating the remaining lifetime of old prestressed concrete bridges, a monitoring of crack widths can be used. However, the time series of crack widths show a strong variation mainly caused by temperature and traffic. Additionally, sequences with extreme volatility appear where the cause is unknown. They are called anomalous sequences in the following. We present and compare four methods which were developed in a pilot study and aim to detect these anomalous sequences in the time series. Volatilities caused by traffic should not be detected. © Springer Nature Switzerland AG 2019.
    DOI: 10.1007/978-3-030-25147-5_16
  • Optimal design of inspection times for interval censoring
    Malevich, N. and Müller, C.H.
    Statistical Papers 60 (2019)
    We treat optimal equidistant and optimal non-equidistant inspection times for interval censoring of exponential distributions. We provide in particular a new approach for determining the optimal non-equidistant inspection times. The resulting recursive formula is related to a formula for optimal spacing of quantiles for asymptotically best linear estimates based on order statistics and to a formula for optimal cutpoints by the discretisation of continuous random variables. Moreover, we show that by the censoring with the optimal non-equidistant inspection times as well as with optimal equidistant inspection times, there is no loss of information if the number of inspections is converging to infinity. Since optimal equidistant inspection times are easier to calculate and easier to handle in practice, we study the efficiency of optimal equidistant inspection times with respect to optimal non-equidistant inspection times. Moreover, since the optimal inspection times are only locally optimal, we also provide some results concerning maximin efficient designs. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.
    DOI: 10.1007/s00362-018-01067-7
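A sketch of the design criterion behind such problems: the Fisher information about the rate `lam` of an exponential lifetime observed only through inspection intervals, with survival past the last inspection as a censored cell. The grid search over equidistant spacings is an illustration of the optimization, not the paper's recursive formula for non-equidistant times.

```python
import math

def fisher_info(lam, times):
    """Fisher information about lam from one Exp(lam) lifetime observed
    only through inspection times 0 < t_1 < ... < t_k (interval
    censoring). Sums (dp/dlam)^2 / p over the multinomial cells,
    including the right-censored cell beyond t_k."""
    ts = [0.0] + list(times)
    info = 0.0
    for t0, t1 in zip(ts, ts[1:]):
        p = math.exp(-lam * t0) - math.exp(-lam * t1)
        dp = -t0 * math.exp(-lam * t0) + t1 * math.exp(-lam * t1)
        info += dp * dp / p
    tk = ts[-1]
    p_cens = math.exp(-lam * tk)          # survival past last inspection
    info += (tk * p_cens) ** 2 / p_cens
    return info

# Grid search for the best equidistant spacing d with k = 3 inspections
# at lam = 1 (locally optimal design, as in the abstract):
best_info, best_d = max(
    (fisher_info(1.0, [d, 2 * d, 3 * d]), d)
    for d in (i / 100 for i in range(1, 301)))
```

Refining the inspection grid always increases the information, and with a fine, long grid the information approaches the uncensored value 1/lam².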
  • Statistical Analysis of the Lifetime of Diamond-Impregnated Tools for Core Drilling of Concrete
    Malevich, N. and Müller, C.H. and Kansteiner, M. and Biermann, D. and Ferreira, M. and Tillmann, W.
    Studies in Classification, Data Analysis, and Knowledge Organization (2019)
    The lifetime of diamond-impregnated tools for core drilling of concrete is studied via the lifetimes of the single diamonds on the tool. Thereby, the number of visible and active diamonds on the tool surface is determined by microscopical inspections of the tool at given points in time. This leads to interval-censored lifetime data if only the diamonds visible at the beginning are considered. If also the lifetimes of diamonds appearing during the drilling process are included, then the lifetimes are doubly interval-censored. We use a well-known maximum likelihood method to analyze the interval-censored data and derive a new extension of it for the analysis of the doubly interval-censored data. The methods are applied to three series of experiments which differ in the size of the diamonds and the type of concrete. It turns out that the lifetimes of small diamonds used for drilling into conventional concrete are much shorter than the lifetimes when using large diamonds or high-strength concrete. © Springer Nature Switzerland AG 2019.
    DOI: 10.1007/978-3-030-25147-5_15
  • Bayesian prediction for a jump diffusion process - With application to crack growth in fatigue experiments
    Hermann, S. and Ickstadt, K. and Müller, C.H.
    Reliability Engineering and System Safety (2018)
    In many fields of technological developments, understanding and controlling material fatigue is an important point of interest. This article is concerned with statistical modeling of the damage process of prestressed concrete under low cyclic load. A crack width process is observed which exhibits jumps with increasing frequency. Firstly, these jumps are modeled using a Poisson process where two intensity functions are presented and compared. Secondly, based on the modeled jump process, a stochastic process for the crack width is considered through a stochastic differential equation (SDE). It turns out that this SDE has an explicit solution. For both modeling steps, a Bayesian estimation and prediction procedure is presented. © 2016 Elsevier Ltd.
    DOI: 10.1016/j.ress.2016.08.012
  • Conditions for high publication rates of countries in high-ranking international statistics journals
    Szugat, S. and Bakhtin, I. and Fechtel, L. and Hüsch, M. and Riehl, J. and Tegethoff, C. and Müller, C.H.
    AStA Wirtschafts- und Sozialstatistisches Archiv (2017)
    In times of “big data”, the professional handling of data becomes an increasingly important factor for competitive markets. This article compares the frequency of statistical publications in an international scope, which can be seen as an indicator for the importance of data sciences in the respective countries. The analysis is based on 3657 articles in 16 leading statistical journals from the years 2010 to 2016. It is shown that Germany's performance in this field is at best mediocre compared to other highly developed countries. For explanation, several explanatory variables such as the economic power and the general developmental state of a country were analyzed. A significant influence was shown particularly for the prevalence of statistics as an autonomous subject in education and research – here represented by the frequency of universities with a statistics department. This influence could also be shown for two of four journal clusters if the journals are clustered according to the departments of the authors. Only for econometric and psychometric journals is the frequency of universities with a statistics department not significant. © 2017 Springer-Verlag Berlin Heidelberg
    DOI: 10.1007/s11943-017-0201-0
  • Investigation of the performance of trimmed estimators of life time distributions with censoring
    Clarke, B.R. and Höller, A. and Müller, C.H. and Wamahiu, K.
    Australian and New Zealand Journal of Statistics 59 (2017)
    For the lifetime (or negative) exponential distribution, the trimmed likelihood estimator has been shown to be explicit in the form of a β-trimmed mean which is representable as an estimating functional that is both weakly continuous and Fréchet differentiable and hence qualitatively robust at the parametric model. It also has high efficiency at the model. The robustness is in contrast to the maximum likelihood estimator (MLE) involving the usual mean which is not robust to contamination in the upper tail of the distribution. When there is known right censoring, it may be perceived that the MLE, which is the most asymptotically efficient estimator, may be protected from the effects of ‘outliers’ due to censoring. We demonstrate that this is not the case generally, and in fact, based on the functional form of the estimators, suggest a hybrid estimator that incorporates the best features of both the MLE and the β-trimmed mean. Additionally, we study the pure trimmed likelihood estimator for censored data and show that it can be easily calculated and that the censored observations are not always trimmed. The different trimmed estimators are compared by a modest simulation study. © 2017 Australian Statistical Publishing Association Inc. Published by John Wiley & Sons Australia Pty Ltd.
    DOI: 10.1111/anzs.12219
  • Resistance of prestressed concrete structures to fatigue in domain of endurance limit
    Heinrich, J. and Maurer, R. and Hermann, S. and Ickstadt, K. and Müller, C.
    High Tech Concrete: Where Technology and Engineering Meet - Proceedings of the 2017 fib Symposium (2017)
    Due to the disproportionate increase of traffic over the last decades, the number of load cycles from heavy trucks imposed on bridges has grown as well. In particular, existing older bridges were generally not designed for current and future traffic loads. Therefore, in many cases, deficiencies concerning the ultimate resistance, the resistance to fatigue and the serviceability limit states have been identified. As a result of the high increase of traffic, the number of load cycles might reach 10^8 over the lifetime of a bridge. The S-N curves were empirically determined under constant amplitude loadings according to recent valid standards. However, in the past the conducted experiments on fatigue behavior of post-tensioned prestressing steel in steel ducts were limited to 2×10^7 load cycles. In the course of various research projects at the TU Dortmund University, eleven large scale tests on prestressed concrete beams have been conducted in total. One of the test girders reached a maximum of 108,250,000 load cycles with a stress range of 50 MPa. The ensuing result of the test series was that, even with a low stress range (50 MPa), it was not possible to determine the endurance limit. These findings call into question the assumed slope coefficient (k2 = 7) of the second branch according to current standards. Besides the experiments under laboratory conditions, investigations via monitoring of an existing bridge with a deficient fatigue strength have been started as well. The long-term objective is the installation of sensors on bridges for the observation of the fatigue behavior. If damage accumulation results in a critical state, necessary measures can be initiated promptly. © Springer International Publishing AG 2018.
    DOI: 10.1007/978-3-319-59471-2_205
  • Bayesian prediction of crack growth based on a hierarchical diffusion model
    Hermann, S. and Ickstadt, K. and Müller, C.H.
    Applied Stochastic Models in Business and Industry 32 (2016)
    A general Bayesian approach for stochastic versions of deterministic growth models is presented to provide predictions for crack propagation in an early stage of the growth process. To improve the prediction, the information of other crack growth processes is used in a hierarchical (mixed-effects) model. Two stochastic versions of a deterministic growth model are compared. One is a nonlinear regression setup where the trajectory is assumed to be the solution of an ordinary differential equation with additive errors. The other is a diffusion model defined by a stochastic differential equation where increments have additive errors. While Bayesian prediction is known for hierarchical models based on nonlinear regression, we propose a new Bayesian prediction method for hierarchical diffusion models. Six growth models for each of the two approaches are compared with respect to their ability to predict the crack propagation in a large data example. Surprisingly, the stochastic differential equation approach has no advantage concerning the prediction compared with the nonlinear regression setup, although the diffusion model seems more appropriate for crack growth. Copyright © 2016 John Wiley & Sons, Ltd.
    DOI: 10.1002/asmb.2175
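The diffusion-model alternative can be sketched with a generic Euler–Maruyama discretization of a growth SDE. The drift and diffusion functions below are hypothetical placeholders for illustration, not any of the six growth models compared in the paper:

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, T, n, rng):
    """Simulate dX_t = drift(X_t) dt + diffusion(X_t) dW_t on [0, T]
    with n Euler-Maruyama steps, returning the whole path."""
    dt = T / n
    x = x0
    path = [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

rng = random.Random(42)
# Illustrative crack-growth-type dynamics: growth rate and noise both
# increase with the current crack width x (hypothetical coefficients).
path = euler_maruyama(0.1, lambda x: 0.5 * x, lambda x: 0.05 * x,
                      T=5.0, n=1000, rng=rng)
```

In a Bayesian treatment one would place priors on the drift and diffusion parameters and sample from their posterior given discretely observed increments; the simulator above then yields path-wise predictions.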
  • Prediction Intervals for the Failure Time of Prestressed Concrete Beams
    Szugat, S. and Heinrich, J. and Maurer, R. and Müller, C.H.
    Advances in Materials Science and Engineering 2016 (2016)
    The aim is the prediction of the failure time of prestressed concrete beams under low cyclic load. Since the experiments last long for low load, accelerated failure tests with higher load are conducted. However, the accelerated tests are expensive so that only few tests are available. To obtain a more precise failure time prediction, the additional information of time points of breakage of tension wires is used. These breakage time points are modeled by a nonlinear birth process. This allows not only point prediction of a critical number of broken tension wires but also prediction intervals which express the uncertainty of the prediction. © 2016 Sebastian Szugat et al.
    DOI: 10.1155/2016/9605450
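A minimal version of such a birth-process prediction: interarrival times between breakages are exponential with a rate that grows as wires fail. The equal-load-sharing rate used below, base_rate * n / (n - j), is an illustrative choice, not the paper's fitted nonlinear model; a Monte Carlo sample of the process then yields a simple prediction interval for the time of a critical number of breakages.

```python
import random

def wire_breakage_times(n_wires, base_rate, rng):
    """Successive failure times of a load-sharing system as a pure birth
    process: with j of n_wires broken, the surviving wires share the
    load, and the system hazard is taken as base_rate * n_wires /
    (n_wires - j) (illustrative equal-load-sharing choice)."""
    t, times = 0.0, []
    for j in range(n_wires):
        t += rng.expovariate(base_rate * n_wires / (n_wires - j))
        times.append(t)
    return times

# Monte Carlo prediction interval for the time of the 10th breakage
# out of 35 wires (hypothetical parameter values):
draws = sorted(wire_breakage_times(35, 0.01, random.Random(s))[9]
               for s in range(1000))
lo, hi = draws[25], draws[974]   # central ~95% of the simulated draws
```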
  • Simplified simplicial depth for regression and autoregressive growth processes
    Kustosz, C.P. and Müller, C.H. and Wendler, M.
    Journal of Statistical Planning and Inference 173 (2016)
    We simplify simplicial depth in two directions for regression and autoregressive growth processes. First we show that simplicial tangent depth often reduces to counting the subsets with alternating signs of the residuals if the regressors are ordered. The second simplification is to not consider all subsets of residuals. By considering only special subsets of residuals, the asymptotic distributions of the simplified simplicial depth notions are normal distributions so that tests and confidence intervals can be derived easily. We propose two simplifications for the general case and a third simplification for the special case where two parameters are unknown. Additionally, we derive conditions for the consistency of the tests. We show that the simplified depth notions can be used for polynomial regression, for several nonlinear regression models, and for several autoregressive growth processes. We compare the efficiency and robustness of the different simplified versions by a simulation study concerning the Michaelis-Menten model and a nonlinear autoregressive process of order one and provide an application on crack growth. © 2016 Elsevier B.V.
    DOI: 10.1016/j.jspi.2016.01.005
  • Tests Based on Simplicial Depth for AR(1) Models With Explosion
    Kustosz, C. P. and Leucht, A. and Müller, C. H.
    Journal of Time Series Analysis 37 (2016)
    We propose an outlier-robust and distribution-free test for the explosive AR(1) model with intercept, based on simplicial depth. In this model, simplicial depth reduces to counting the cases where three residuals have alternating signs. The asymptotic distribution of the test statistic is given by a specific Gaussian process. Conditions for the consistency are given, and the power of the test at finite samples is compared with alternative tests. The new test outperforms these tests in the case of skewed errors and outliers. Finally, we apply the method to crack growth data and compare the results with an OLS approach.
    DOI: 10.1111/jtsa.12186
  • Trimmed likelihood estimators for lifetime experiments and their influence functions
    Müller, C.H. and Szugat, S. and Celik, N. and Clarke, B.R.
    Statistics 50 (2016)
    We study the behaviour of trimmed likelihood estimators (TLEs) for lifetime models with exponential or lognormal distributions possessing a linear or nonlinear link function. In particular, we investigate the difference between two possible definitions for the TLE, one called original trimmed likelihood estimator (OTLE) and one called modified trimmed likelihood estimator (MTLE) which is the finite sample version of a form for location and linear regression used by Bednarski and Clarke [Trimmed likelihood estimation of location and scale of the normal distribution. Aust J Statist. 1993;35:141–153, Asymptotics for an adaptive trimmed likelihood location estimator. Statistics. 2002;36:1–8] and Bednarski et al. [Adaptive trimmed likelihood estimation in regression. Discuss Math Probab Stat. 2010;30:203–219]. The OTLE is always an MTLE but the MTLE may not be unique even in cases where the OTLE is unique. We compare especially the functional forms of both types of estimators, characterize the difference with the implicit function theorem and indicate situations where they coincide and where they do not coincide. Since the functional form of the MTLE has a simpler form, we use it then for deriving the influence function, again with the help of the implicit function theorem. The derivation of the influence function for the functional form of the OTLE is similar but more complicated. © 2015 Taylor & Francis.
    DOI: 10.1080/02331888.2015.1104313
  • Analysis of crack growth with robust, distribution-free estimators and tests for non-stationary autoregressive processes
    Kustosz, C.P. and Müller, C.H.
    Statistical Papers 55 (2014)
    This article investigates the application of depth estimators to crack growth models in construction engineering. Many crack growth models are based on the Paris-Erdogan equation which describes crack growth by a deterministic differential equation. By introducing a stochastic error term, crack growth can be modeled by a non-stationary autoregressive process with Lévy-type errors. A regression depth approach is presented to estimate the drift parameter of the process. We then prove the consistency of the estimator under quite general assumptions on the error distribution. By an extension of the depth notion to simplicial depth it is possible to use a degenerated U-statistic and to establish tests for general hypotheses about the drift parameter. Since the statistic asymptotically has a transformed χ² distribution with one degree of freedom, simple confidence intervals for the drift parameter can be obtained. In the second part, simulations of AR(1) processes with different error distributions are used to examine the quality of the constructed test. Finally we apply the presented method to crack growth experiments. We compare two datasets from independent experiments under different conditions but with the same material. We show that the parameter estimates differ significantly in this case. © 2012 Springer-Verlag Berlin Heidelberg.
    DOI: 10.1007/s00362-012-0479-5
  • Consistency of the likelihood depth estimator for the correlation coefficient
    Denecke, L. and Müller, C.H.
    Statistical Papers 55 (2014)
    Denecke and Müller (CSDA 55:2724-2738, 2011) presented an estimator for the correlation coefficient based on likelihood depth for Gaussian copula and Denecke and Müller (J Stat Planning Inference 142: 2501-2517, 2012) proved a theorem about the consistency of general estimators based on data depth using uniform convergence of the depth measure. In this article, the uniform convergence of the depth measure for correlation is shown so that consistency of the correlation estimator based on depth can be concluded. The uniform convergence is shown with the help of the extension of the Glivenko–Cantelli lemma by Vapnik–Chervonenkis classes. © 2012 Springer-Verlag Berlin Heidelberg.
    DOI: 10.1007/s00362-012-0490-x
  • New robust tests for the parameters of the Weibull distribution for complete and censored data
    Denecke, L. and Müller, C.H.
    Metrika 77 (2014)
    Using the likelihood depth, new consistent and robust tests for the parameters of the Weibull distribution are developed. Uncensored as well as type-I right-censored data are considered. Tests are given for the shape parameter and also the scale parameter of the Weibull distribution, where in each case both the situation that the other parameter is known and the situation that both parameters are unknown are examined. In simulation studies the behavior for finite sample sizes and contaminated data is analyzed and the new method is compared to existing ones. Here it is shown that the new tests based on likelihood depth give quite good results compared to standard methods and are robust against contamination. They are also robust in right-censored data in contrast to existing methods like the method of medians. © 2013 Springer-Verlag Berlin Heidelberg.
    doi:10.1007/s00184-013-0454-8
  • Upper and lower bounds for breakdown points
    Müller, C.H.
    Robustness and Complex Data Structures: Festschrift in Honour of Ursula Gather (2013)
    General upper and lower bounds for the finite sample breakdown point are presented. The general upper bound is obtained by an approach of Davies and Gather using algebraic groups of transformations. It is shown that the upper bound for the finite sample breakdown point has a simpler form than that for the population breakdown point. This result is applied to multivariate regression. It is shown that the upper bounds of the breakdown points of estimators of regression parameters, location and scatter can be obtained with the same group of transformations. The general lower bound for the breakdown point of some estimators is given via the concept of d-fullness introduced by Vandev. As a consequence, the lower bound and the upper bound can coincide for least trimmed squares estimators for multivariate regression and for simultaneous estimation of scale and regression parameters. © Springer-Verlag Berlin Heidelberg 2013.
    doi:10.1007/978-3-642-35494-6_5
  • Consistency and robustness of tests and estimators based on depth
    Denecke, L. and Müller, C.H.
    Journal of Statistical Planning and Inference 142 (2012)
    In this paper it is shown that data depth provides not only consistent and robust estimators but also consistent and robust tests. Here, consistency of a test means that the Type I (α) error and the Type II (β) error converge to zero with growing sample size in the interior of the null hypothesis and the alternative, respectively. Robustness is measured by the breakdown point, which depends here on a so-called concentration parameter. The consistency and robustness properties are shown for cases where the parameter of maximum depth is a biased estimator and has to be corrected. This bias is a disadvantage for estimation but an advantage for testing: it causes the corresponding simplicial depth not to be a degenerate U-statistic, so that tests can be derived easily. However, the straightforward tests have very poor power, although they are asymptotic α-level tests. To improve the power, a new method is presented to modify these tests so that even consistency of the modified tests is achieved. Examples of two-dimensional copulas and the Weibull distribution show the applicability of the new method. © 2012 Elsevier B.V.
    doi:10.1016/j.jspi.2012.03.024
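    The depth statistics behind such tests can be made concrete in the simplest one-dimensional location case: the simplicial depth of a candidate value is the fraction of pairs of observations whose closed interval covers it. The Python toy below only illustrates this idea (the paper works with likelihood depth for parametric models, which is more general); the data values are invented.

    ```python
    from itertools import combinations

    def simplicial_depth_1d(theta, xs):
        """Simplicial depth of a candidate location theta in 1-D:
        the fraction of pairs (X_i, X_j) whose closed interval
        [min(X_i, X_j), max(X_i, X_j)] contains theta."""
        pairs = list(combinations(xs, 2))
        covered = sum(1 for a, b in pairs if min(a, b) <= theta <= max(a, b))
        return covered / len(pairs)

    xs = [1.0, 2.0, 3.0, 4.0, 10.0]
    print(simplicial_depth_1d(3.0, xs))  # central value: depth 0.8
    print(simplicial_depth_1d(9.0, xs))  # outlying value: depth 0.4
    ```

    Plausible parameter values attain high depth, outlying ones low depth, which is what makes depth usable as a test statistic.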
  • Micro crack detection with Dijkstra's shortest path algorithm
    Gunkel, C. and Stepper, A. and Müller, A.C. and Müller, C.H.
    Machine Vision and Applications 23 (2012)
    A package based on the free software R is presented which allows the automatic detection of micro cracks and a corresponding statistical analysis of crack quantities. It uses a shortest path algorithm to detect micro cracks in situations where the cracks are surrounded by plastic deformations and where a discrimination between cracks and plastic deformations is difficult. In a first step, crack clusters are detected as connected components of pixels with values below a given threshold value. Then the crack paths are determined by Dijkstra's algorithm as longest shortest paths through the darkest parts of the crack clusters. In this way, linear parts of kinked paths can also be identified. The new method was applied to over 2,000 images. Some statistical applications and a comparison with another free image tool are given. © Springer-Verlag 2012.
    doi:10.1007/s00138-011-0324-1
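    The path-extraction step described in the abstract can be sketched in a few lines. The Python toy below (not the R package of the paper) thresholds a small grey-value image to get a crack cluster, approximates the endpoints of the "longest shortest path" by a double BFS sweep (a common simplification), and then routes a Dijkstra shortest path between them, where stepping onto a pixel costs its grey value, so the path follows the darkest pixels. Image, threshold and connectivity are invented for illustration.

    ```python
    import heapq
    from collections import deque

    # Toy grey-value image: small values are dark crack pixels, 9 is background.
    IMG = [
        [9, 9, 9, 9, 9, 9, 9],
        [9, 1, 2, 9, 9, 9, 9],
        [9, 9, 1, 2, 1, 9, 9],
        [9, 9, 9, 9, 3, 1, 9],
        [9, 9, 9, 9, 9, 9, 9],
    ]
    THRESHOLD = 4
    STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

    def cluster_pixels(img, thr):
        """Crack cluster: all pixels darker than the threshold."""
        return {(r, c) for r, row in enumerate(img)
                for c, v in enumerate(row) if v < thr}

    def bfs_farthest(cluster, start):
        """Unweighted BFS inside the cluster; returns the pixel with the
        largest hop distance from start (an endpoint candidate)."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for dr, dc in STEPS:
                nb = (r + dr, c + dc)
                if nb in cluster and nb not in dist:
                    dist[nb] = dist[(r, c)] + 1
                    queue.append(nb)
        return max(dist, key=dist.get)

    def darkest_path(img, cluster, start, goal):
        """Dijkstra where entering a pixel costs its grey value, so the
        cheapest start-goal path runs through the darkest pixels."""
        dist, prev = {start: 0}, {}
        heap = [(0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if d > dist[(r, c)]:
                continue
            for dr, dc in STEPS:
                nb = (r + dr, c + dc)
                if nb in cluster:
                    nd = d + img[nb[0]][nb[1]]
                    if nd < dist.get(nb, float("inf")):
                        dist[nb], prev[nb] = nd, (r, c)
                        heapq.heappush(heap, (nd, nb))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]

    cluster = cluster_pixels(IMG, THRESHOLD)
    a = bfs_farthest(cluster, min(cluster))   # double sweep: first endpoint
    b = bfs_farthest(cluster, a)              # second endpoint
    crack = darkest_path(IMG, cluster, b, a)
    print(crack)
    ```

    On this toy image the extracted path runs from the upper-left end of the dark cluster to its lower-right end through the darkest pixels.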
  • Consistent estimation of species abundance from a presence-absence map
    Müller, C.H. and Huggins, R. and Hwang, W.-H.
    Statistics and Probability Letters 81 (2011)
    The estimation of the abundance of a species using the presence or absence of the species over a grid of cells simplifies data collection, but the resulting statistical analysis is challenging. Several estimators have been proposed, but their properties are unknown. Here we consider a generalized gamma-Poisson model which allows dependencies across the grid and develop a new estimator for this model. It is shown that this estimator is consistent, allowing us to conclude that it is indeed possible to estimate abundance from presence-absence maps. © 2011 Elsevier B.V.
    doi:10.1016/j.spl.2011.04.005
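    The basic identifiability question (can abundance be recovered from presence/absence alone?) is easy to illustrate in the simplest special case of independent Poisson counts per cell; the paper's generalized gamma-Poisson model allows dependence and is more involved. If each of m cells holds Poisson(λ) individuals, the occupancy probability is p = 1 - exp(-λ), which inverts to λ̂ = -log(1 - p̂) and abundance ≈ m·λ̂. A hedged Python sketch with simulated data (all parameter values invented):

    ```python
    import math
    import random

    def simulate_presence_map(n_cells, lam, seed=1):
        """Simulate per-cell Poisson(lam) counts and reduce them to the
        presence/absence map that the observer actually records."""
        rng = random.Random(seed)

        def poisson(l):
            # Knuth's multiplication algorithm, fine for small lam.
            big_l, k, p = math.exp(-l), 0, 1.0
            while True:
                p *= rng.random()
                if p <= big_l:
                    return k
                k += 1

        counts = [poisson(lam) for _ in range(n_cells)]
        presence = [c > 0 for c in counts]
        return counts, presence

    def abundance_from_presence(presence):
        """Naive estimator under an independent Poisson model:
        p = 1 - exp(-lam)  =>  lam_hat = -log(1 - p_hat),
        N_hat = n_cells * lam_hat."""
        m = len(presence)
        p_hat = sum(presence) / m
        return m * (-math.log(1.0 - p_hat))

    counts, presence = simulate_presence_map(n_cells=2000, lam=0.5)
    print(abundance_from_presence(presence), sum(counts))
    ```

    With 2000 cells and λ = 0.5, the true expected abundance is 1000, and the presence-only estimate lands close to the actual simulated total even though per-cell counts were never observed.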
  • Data depth for simple orthogonal regression with application to crack orientation
    Müller, C.H.
    Metrika 74 (2011)
    This paper studies tangential and simplicial data depth for simple orthogonal regression. Given N points in the plane, simple orthogonal regression means that we wish to determine the line through the origin that has the smallest distance to the points, measured in the direction orthogonal to the line. For both depth notions, it is proved that two lines which are orthogonal to each other, i.e., two lines forming a cross, have the same depth. Depth-based orthogonal regression can thus merely fit crosses, not lines. We investigate the robustness properties of maximum depth estimators using the notion of exact fit. Another topic the paper covers is the testing of the hypothesis that the data points form a cross-like pattern. After a simple transformation, such a test can be based on the biggest data depth. The paper discusses an application of this test to the investigation of stress fractures in materials. © 2009 Springer-Verlag.
    doi:10.1007/s00184-009-0294-8
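    For the least-squares analogue of this problem there is a closed form worth keeping in mind (this is the classical total-least-squares fit, not the depth-based fit of the paper): the angle φ of the best line through the origin satisfies 2φ = atan2(2·Σxᵢyᵢ, Σxᵢ² - Σyᵢ²), and the worst line is exactly the one rotated by 90°, which echoes the cross symmetry described in the abstract. A minimal Python sketch with invented data points:

    ```python
    import math

    def orthogonal_fit_angle(points):
        """Least-squares orthogonal regression through the origin:
        angle phi of the line minimizing the sum of squared orthogonal
        distances.  Closed form: 2*phi = atan2(2*Sxy, Sxx - Syy)."""
        sxx = sum(x * x for x, y in points)
        syy = sum(y * y for x, y in points)
        sxy = sum(x * y for x, y in points)
        return 0.5 * math.atan2(2.0 * sxy, sxx - syy)

    def orthogonal_residual(points, phi):
        """Sum of squared distances measured orthogonally to the line
        through the origin with direction (cos phi, sin phi)."""
        s, c = math.sin(phi), math.cos(phi)
        return sum((-x * s + y * c) ** 2 for x, y in points)

    # Noise-free points on the line y = 2x: the fit recovers slope 2,
    # and rotating the fitted line by 90 degrees maximizes the residual.
    pts = [(-2.0, -4.0), (1.0, 2.0), (3.0, 6.0)]
    phi = orthogonal_fit_angle(pts)
    print(round(math.tan(phi), 6))        # slope of fitted line, approx. 2.0
    print(orthogonal_residual(pts, phi))  # essentially 0 on the line
    ```

    The same trigonometric identity shows that φ and φ + π/2 are the only critical angles, one minimizing and one maximizing the orthogonal residual, so the two lines of a cross are tied together analytically as well.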
  • Robust estimators and tests for bivariate copulas based on likelihood depth
    Denecke, L. and Müller, C.H.
    Computational Statistics and Data Analysis 55 (2011)
    Estimators and tests based on likelihood depth for one-parametric copulas are given. For the Gaussian and Gumbel copulas, it is shown that the maximum depth estimators are biased. They can be corrected, and the new estimators are robust against contamination. For testing, simplicial likelihood depth is considered. Because of the bias of the maximum depth estimator, simplicial likelihood depth is not a degenerate U-statistic, so that asymptotic α-level tests can easily be derived for arbitrary hypotheses. In particular, tests for one-sided alternatives are investigated. Simulation studies for the Gaussian and Gumbel copulas show that the power of the first test is rather good, while the latter has to be improved, which is also done here. The new tests are robust against contamination. © 2011 Elsevier B.V. All rights reserved.
    doi:10.1016/j.csda.2011.04.005
  • Statistical analysis of damage evolution with a new image tool
    Müller, C.H. and Gunkel, C. and Denecke, L.
    Fatigue and Fracture of Engineering Materials and Structures 34 (2011)
    The surface damage evolution under stress is often analysed by images of long-distance microscopes. Usually hundreds of images are obtained during the fatigue process. To analyse this huge number of images automatically, a new image tool is presented. This new image tool is included in free statistical software so that a statistical analysis of the damage evolution is easily possible. In particular, several specific damage parameters can be calculated during the fatigue process. Some of these specific damage parameters are compared statistically here with simple damage parameters, using images of two specimens under different stress levels at different time points of the fatigue process. It is shown that the specific damage parameters discriminate between the two different damage evolutions at an earlier stage than the simple parameters. They are also less influenced by different brightness and scales of the images and show other desirable properties of a damage parameter. © 2011 Blackwell Publishing Ltd.
    doi:10.1111/j.1460-2695.2010.01543.x
  • Depth notions for orthogonal regression
    Wellmann, R. and Müller, C.H.
    Journal of Multivariate Analysis 101 (2010)
    Global depth, tangent depth and simplicial depths for classical and orthogonal regression are compared in examples, and properties that are useful for calculations are derived. The robustness of the maximum simplicial depth estimates is shown in examples. Algorithms for the calculation of depths for orthogonal regression are proposed, and tests for multiple regression are transferred to orthogonal regression. These tests are distribution free in the case of bivariate observations. For a particular test problem, the powers of tests that are based on simplicial depth and tangent depth are compared by simulations. © 2010 Elsevier Inc.
    doi:10.1016/j.jmva.2010.06.008
  • Tests for multiple regression based on simplicial depth
    Wellmann, R. and Müller, C.H.
    Journal of Multivariate Analysis 101 (2010)
    A general approach for developing distribution-free tests for general linear models based on simplicial depth is applied to multiple regression. The tests are based on the asymptotic distribution of the simplicial regression depth, which depends only on the distribution law of the vector product of the regressor variables. Based on this formula, the spectral decomposition and thus the asymptotic distribution are derived for multiple regression through the origin and multiple regression with Cauchy distributed explanatory variables. The errors may be heteroscedastic, and the concrete form of the error distribution does not need to be known. Moreover, the asymptotic distribution for multiple regression with intercept does not depend on the location and scale of the explanatory variables. A simulation study suggests that the tests can also be applied to normally distributed explanatory variables. An application to multiple regression for shape analysis of fishes demonstrates the applicability of the new tests and in particular their outlier robustness. © 2010 Elsevier Inc. All rights reserved.
    doi:10.1016/j.jmva.2009.12.008
  • cluster and image analysis

  • consistency

  • damage

  • robust statistics

  • technometrics
