### Scientific Output

Over 10,000 scientific papers have been published by members of the Materials Chain since the foundation of the University Alliance Ruhr in 2010. This tremendous output attests to the excellent environment the Ruhr Area provides for research in the field of materials science and technology.

Below, you can either scroll through the complete list of our annually published material, search for a specific author or term via the **free text search**, or use the **interactive keyword cloud** to get to know our research strengths. You can also review the publication record of every Materials Chain member via their personal member page.

**Interactive keyword cloud:**

adsorption, aluminum, anisotropy, atomic force microscopy, atoms, calculations, carbon, carbon dioxide, catalysis, catalysts, chemistry, coatings, computer simulation, copper, crystal structure, deformation, density functional theory, deposition, diffusion, elasticity, electrodes, electrons, finite element method, gold, grain boundaries, high resolution transmission electron microscopy, hydrogen, liquids, manganese, mass spectrometry, mechanical properties, metabolism, metal nanoparticles, metals, microstructure, molecular dynamics, nanocrystals, nanoparticles, nickel, optimization, oxidation, oxygen, particle size, plasticity, polymers, probes, scanning electron microscopy, silicon, silver, single crystals, substrates, surface properties, synthesis (chemical), temperature, thermodynamics, thin films, titanium, transmission electron microscopy, x-ray diffraction, x-ray photoelectron spectroscopy

- or -

**Free text search:**

2020 • 179 **Dictionary learning in Fourier-transform scanning tunneling spectroscopy**

Cheung, S.C. and Shin, J.Y. and Lau, Y. and Chen, Z. and Sun, J. and Zhang, Y. and Müller, M.A. and Eremin, I.M. and Wright, J.N. and Pasupathy, A.N. *Nature Communications* 11 (2020). Modern high-resolution microscopes are commonly used to study specimens that have dense and aperiodic spatial structure. Extracting meaningful information from images obtained from such microscopes remains a formidable challenge. Fourier analysis is commonly used to analyze the structure of such images. However, the Fourier transform fundamentally suffers from severe phase noise when applied to aperiodic images. Here, we report the development of an algorithm based on nonconvex optimization that directly uncovers the fundamental motifs present in a real-space image. Apart from being quantitatively superior to traditional Fourier analysis, we show that this algorithm also uncovers phase sensitive information about the underlying motif structure. We demonstrate its usefulness by studying scanning tunneling microscopy images of a Co-doped iron arsenide superconductor and prove that the application of the algorithm allows for the complete recovery of quasiparticle interference in this material. © 2020, The Author(s). doi: 10.1038/s41467-020-14633-1

2020 • 178 **Multilevel surrogate modeling approach for optimization problems with polymorphic uncertain parameters**

Freitag, S. and Edler, P. and Kremer, K. and Meschke, G. *International Journal of Approximate Reasoning* 119, 81-91 (2020). The solution of optimization problems with polymorphic uncertain data requires combining stochastic and non-stochastic approaches. The concept of uncertain a priori parameters and uncertain design parameters quantified by random variables and intervals is presented in this paper. Multiple runs of the nonlinear finite element model solving the structural mechanics with varying a priori and design parameters are needed to obtain a solution by means of iterative optimization algorithms (e.g. particle swarm optimization). The combination of interval analysis and Monte Carlo simulation is required for each design to be optimized. This can only be realized by substituting the nonlinear finite element model by numerically efficient surrogate models. In this paper, a multilevel strategy for neural network based surrogate modeling is presented. The deterministic finite element simulation, the stochastic analysis as well as the interval analysis are approximated by sequentially trained artificial neural networks. The approach is verified and applied to optimize the concrete cover of a reinforced concrete structure, taking the variability of material parameters and the structural load as well as construction imprecision into account. © 2019 Elsevier Inc. doi: 10.1016/j.ijar.2019.12.015

2020 • 177 **Development and Implementation of Statistical Methods for Quality Optimization in the Large-Format Lithium-Ion Cells Production**

Meyer, O. and Weihs, C. and Mähr, S. and Tran, H.-Y. and Kirchhof, M. and Schnackenberg, S. and Neuhaus-Stern, J. and Rößler, S. and Braunwarth, W. *Energy Technology* 8 (2020). Herein, two techniques to optimize the production process of large-format lithium-ion cells for plug-in hybrid electric vehicles using data-driven methods are introduced and demonstrated. The first approach uses standard settings of the quality influencing factors to maximize the number of produced electrode sheets that meet predefined quality specifications. The second approach uses statistical methods to determine the levels of the quality influencing factors of a certain process that optimizes all quality parameters of the corresponding product jointly. © 2019 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. doi: 10.1002/ente.201900244

2020 • 176 **Transferability of process parameters in laser powder bed fusion processes for an energy and cost efficient manufacturing**

Pannitz, O. and Sehrt, J.T. *Sustainability (Switzerland)* 12, 1-14 (2020). In the past decade, the sales of metal additive manufacturing systems have increased intensely. In particular, PBF-LB/M systems (powder bed fusion of metals using a laser-based system) represent a technology of great industrial interest, in which metallic powders are molten and solidified layer upon layer by a focused laser beam. This leads to a simultaneous increase in demand for metallic powder materials. Due to adjusted process parameters of PBF-LB/M systems, the powder is usually procured by the system's manufacturer. The requirement and freedom to process different feedstocks in a reproducible quality and the economic and ecological factors involved are reasons to have a closer look at the differences between the quality of the provided metallic powders. Besides, different feedstock materials require different energy inputs, allowing a sustainable process control to be established. In this work, powder quality of stainless steel 1.4404 and the effects during the processing of metallic powders that are nominally the same were analyzed and the influence on the build process followed by the final part quality was investigated. Thus, a correlation between morphology, particle size distribution, absorptivity, flowability, and densification depending on process parameters was demonstrated. Optimized exposure parameters to ensure a more sustainable and energy and cost-efficient manufacturing process were determined. © 2020 by the authors. doi: 10.3390/su12041565

2020 • 175 **Model-Based Analysis of the Photocatalytic HCl Oxidation Kinetics over TiO2**

Rath, T. and Bloh, J.Z. and Lüken, A. and Ollegott, K. and Muhler, M. *Industrial and Engineering Chemistry Research* 59, 4265-4272 (2020). The kinetic modeling of photocatalytic reactions is a powerful tool for process optimization. We applied a holistic kinetic model for the gas-phase photocatalytic oxidation of HCl to Cl2 to identify suitable operation conditions and further optimization potential. We used a flat-plate photoreactor with UV LEDs and iodometric titration as online analytics and performed a comprehensive parameter variation. High O2 and moderate HCl partial pressures resulted in the highest reaction rates, indicating a favorable reactant ratio of 4:1. An Arrhenius dependence of the reaction rate with an apparent activation energy of 25.7 kJ mol-1 identifies a suitable reaction temperature of ∼120 °C. This temperature combines high reaction rates with high apparent quantum yields up to 8.4%, showing a logarithmic dependence of reaction rates on light intensity. The well-fitting kinetic model predicts that improving the intrinsic activity of the photocatalyst is the key for further enhancing the efficiency of photocatalytic HCl recycling. Copyright © 2020 American Chemical Society. doi: 10.1021/acs.iecr.9b05820

2020 • 174 **Robust optimization scheme for inverse method for crystal plasticity model parametrization**
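As an aside on the Arrhenius dependence quoted in entry 175 (Rath et al.) above: the reported apparent activation energy of 25.7 kJ mol-1 implies roughly an order-of-magnitude rate gain between room temperature and the recommended ~120 °C. A minimal Python sketch; only the activation energy is taken from the abstract, the temperatures and the rate-law form are illustrative:

```python
import math

R = 8.314    # universal gas constant, J mol^-1 K^-1
EA = 25.7e3  # apparent activation energy reported in the study, J mol^-1

def arrhenius_ratio(t1_k: float, t2_k: float, ea: float = EA) -> float:
    """Ratio k(T2)/k(T1) for an Arrhenius rate law k = A * exp(-Ea / (R*T))."""
    return math.exp(-ea / R * (1.0 / t2_k - 1.0 / t1_k))

# Rate gain when heating from 25 degC (298 K) to ~120 degC (393 K):
gain = arrhenius_ratio(298.15, 393.15)  # roughly a factor of 12
```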

Shahmardani, M. and Vajragupta, N. and Hartmaier, A. *Materials* 13 (2020). A bottom-up material modeling based on a nonlocal crystal plasticity model requires information of a large set of physical and phenomenological parameters. Because of the many material parameters, it is inherently difficult to determine the nonlocal crystal plasticity parameters. Therefore, a robust method is proposed to parameterize the nonlocal crystal plasticity model of a body-centered cubic (BCC) material by combining a nanoindentation test and inverse analysis. Nanoindentation tests returned the load-displacement curve and surface imprint of the considered sample. The inverse analysis is developed based on the trust-region-reflective algorithm, which is the most robust optimization algorithm for the considered non-convex problem. The discrepancy function is defined to minimize both the load-displacement curves and the surface topologies of the considered material under applying varied indentation forces obtained from numerical models and experimental output. The numerical model results based on the identified material properties show good agreement with the experimental output. Finally, a sensitivity analysis performed by changing the nonlocal crystal plasticity parameters in a predefined range emphasized that the geometrical factor has the most significant influence on the load-displacement curve and surface imprint parameters. © 2020 by the authors. doi: 10.3390/ma13030735

2020 • 173 **A combined experimental and modelling approach for the Weimberg pathway optimisation**
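The inverse analysis in entry 174 (Shahmardani et al.) above minimizes a discrepancy function between simulated and measured indentation responses. A toy sketch of that idea, with a hypothetical one-parameter forward model and a golden-section search standing in for the paper's multi-parameter trust-region-reflective algorithm:

```python
# Hypothetical forward model: load-displacement curve P(h) = c * h**1.5,
# used purely for illustration (not the paper's crystal plasticity model).
def forward(c, depths):
    return [c * h ** 1.5 for h in depths]

def discrepancy(c, depths, measured):
    """Sum of squared residuals between simulated and 'measured' loads."""
    return sum((p - m) ** 2 for p, m in zip(forward(c, depths), measured))

def golden_section(f, lo, hi, tol=1e-8):
    """1-D golden-section minimisation; a stand-in for trust-region-reflective,
    which handles the many-parameter, bounded case."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c1, c2 = b - phi * (b - a), a + phi * (b - a)
        if f(c1) < f(c2):
            b = c2
        else:
            a = c1
    return (a + b) / 2

depths = [0.1 * i for i in range(1, 11)]
measured = forward(2.5, depths)  # synthetic "experiment" with c_true = 2.5
c_fit = golden_section(lambda c: discrepancy(c, depths, measured), 0.1, 10.0)
```

With clean synthetic data the scan recovers the generating parameter; real identification adds noise, multiple objectives (curve plus surface imprint), and bounds.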

Shen, L. and Kohlhaas, M. and Enoki, J. and Meier, R. and Schönenberger, B. and Wohlgemuth, R. and Kourist, R. and Niemeyer, F. and van Niekerk, D. and Bräsen, C. and Niemeyer, J. and Snoep, J. and Siebers, B. *Nature Communications* 11 (2020). The oxidative Weimberg pathway for the five-step pentose degradation to α-ketoglutarate is a key route for sustainable bioconversion of lignocellulosic biomass to added-value products and biofuels. The oxidative pathway from Caulobacter crescentus has been employed in in-vivo metabolic engineering with intact cells and in in-vitro enzyme cascades. The performance of such engineering approaches is often hampered by systems complexity, caused by non-linear kinetics and allosteric regulatory mechanisms. Here we report an iterative approach to construct and validate a quantitative model for the Weimberg pathway. Two sensitive points in pathway performance have been identified as follows: (1) product inhibition of the dehydrogenases (particularly in the absence of an efficient NAD+ recycling mechanism) and (2) balancing the activities of the dehydratases. The resulting model is utilized to design enzyme cascades for optimized conversion and to analyse pathway performance in C. crescentus cell-free extracts. © 2020, The Author(s). doi: 10.1038/s41467-020-14830-y

2019 • 172 **Maximizing Information Extraction of Extended Radar Targets Through MIMO Beamforming**

Ahmed, A.M. and Alameer, A. and Erni, D. and Sezgin, A. *IEEE Geoscience and Remote Sensing Letters* 16, 539-543 (2019). We jointly design information-theoretic transmit and receive radar beamformers for spatially near multiple extended targets. We maximize the mutual information (MI) between the received signals and the targets' signatures that allows the extraction of the unknown features, which may include shape, dimensions, and material. However, high interference caused by spatially near targets might obstruct the information extraction, and directing the beamformers toward the steering vector as done in conventional beamformers does not solve this problem, especially for extended targets. In this letter, an iterative algorithm is presented to solve this problem using alternative minimization, dividing it into two blocks. The first block is solving for the transmit beamformers successively using block coordinate descent, and the second one is solving for the receiver beamformers using the minimum variance distortionless response. We also show the effect of using our beamformers on the waveform design problem. Numerical results indicate that this algorithm can achieve substantially higher MI than the existing conventional methods. Thus, except for some degenerate cases, having fixed beamformers instead of optimized ones leads to significant performance degradation. © 2004-2012 IEEE. doi: 10.1109/LGRS.2018.2876714

2019 • 171 **Compression–expansion processes for chemical energy storage: Thermodynamic optimization for methane, ethane and hydrogen**

Atakan, B. *Energies* 12 (2019). Several methods for chemical energy storage have been discussed recently in the context of fluctuating energy sources, such as wind and solar energy conversion. Here a compression–expansion process, as also used in piston engines or compressors, is investigated to evaluate its potential for the conversion of mechanical energy to chemical energy, or more correctly, exergy. A thermodynamically limiting adiabatic compression–chemical equilibration–expansion cycle is modeled and optimized for the amount of stored energy with realistic parameter bounds of initial temperature, pressure, compression ratio and composition. As an example of the method, initial mixture compositions of methane, ethane, hydrogen and argon are optimized and the results discussed. In addition to the stored exergy, the main products (acetylene, benzene, and hydrogen) and exergetic losses of this thermodynamically limiting cycle are also analyzed, and the volumetric and specific work are discussed as objective functions. It was found that the optimal mixtures are binary methane argon mixtures with high argon content. The predicted exergy losses due to chemical equilibration are generally below 10%, and the chemical exergy of the initial mixture can be increased or chemically up-converted due to the work input by approximately 11% in such a thermodynamically limiting process, which appears promising. © 2019 by the author. doi: 10.3390/en12173332

2019 • 170 **Light enough or go lighter?**

Hahn, M. and Gies, S. and Tekkaya, A.E. *Materials and Design* 163 (2019). A novel concept for evaluating the lightweight design of structural components, named the true lightweight degree, is developed. It is found that traditional lightweight parameters just constitute a lower bound for stiffness-oriented designs. This can be attributed to the low underlying design freedom. However, this is often not suitable anymore as today's manufacturing processes and materials typically allow for an increased design freedom. With a simplified analytical model, it is shown that a combination of a requirement-based equivalent strain, the specific Young's modulus, and the yield strength gives an upper bound. Numerical topology optimizations prove that this theoretical upper bound can serve as a good qualitative criterion for the mass-minimizing material choice. The investigations reveal that the right material choice can be quite sensitive to the degree of design freedom. For example, under lower-bound design constraints, an aluminum alloy having a yield strength of 280 MPa enables a slightly lighter component mass than a steel alloy with a yield strength of 800 MPa. In contrast, the steel alloy yields a considerably lower mass if full geometric design freedom is assumed. Yet, the concept derived within this work is only valid for elastically loaded, stiffness-oriented components made of isotropic material. © 2018 The Authors. doi: 10.1016/j.matdes.2018.107545

2019 • 169 **Optimization with constraints considering polymorphic uncertainties**

Mäck, M. and Caylak, I. and Edler, P. and Freitag, S. and Hanss, M. and Mahnken, R. and Meschke, G. and Penner, E. *GAMM Mitteilungen* 42 (2019). In this contribution, a numerical design strategy for the optimization under polymorphic uncertainty is introduced and applied to a self-weight minimization of a framework structure. The polymorphic uncertainty, which affects the constraint function of the optimization problem, is thereby modeled in terms of stochastic variables, fuzzy sets, and intervals to account for variability, imprecision and insufficient information. The stochastic quantities are computed using polynomial chaos expansion resulting in a purely fuzzy-valued formulation of the constraint functions which can be computed using α-cut optimization. Afterward, the constraint function can be interpreted in a possibilistic manner, resulting in a flexible formulation to include expert knowledge and to achieve a robust design. © 2019 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. doi: 10.1002/gamm.201900005

2019 • 168 **Robust segmental lining design – Potentials of advanced numerical simulations for the design of TBM driven tunnels**

Meschke, G. and Neu, G.E. and Marwan, A. *Geomechanik und Tunnelbau* 12, 484-490 (2019). Loading assumptions used for the structural design of segmental linings often improperly reflect the complex load combinations that develop during the construction of a bored tunnel. Therefore, segment designs used in practice tend to be on the safe side and often rely on conventional reinforcement methods instead of including other reinforcement concepts, such as steel fibres. In this contribution, a multi-scale computational modelling framework is proposed to investigate the response of steel-fibre reinforced, traditionally reinforced, and hybrid-reinforced lining segments to radial loadings with an emphasis on the longitudinal joints. This modelling approach offers an opportunity to directly investigate the influence of type and content of steel fibres on the performance of segmented linings at the structural scale. Using this framework, a method for robust optimization is applied in order to generate damage-tolerant hybrid segment designs. © 2019 Ernst & Sohn Verlag für Architektur und technische Wissenschaften GmbH & Co. KG, Berlin. doi: 10.1002/geot.201900032

2019 • 167 **Optimal remediation design and simulation of groundwater flow coupled to contaminant transport using genetic algorithm and radial point collocation method (RPCM)**

Seyedpour, S.M. and Kirmizakis, P. and Brennan, P. and Doherty, R. and Ricken, T. *Science of the Total Environment* 669, 389-399 (2019). The simulation-optimisation models of groundwater and contaminant transport can be a powerful tool in the management of groundwater resources and remediation design. In this study, using Multiquadratic Radial Basis Function (MRBF) a coupled groundwater flow and reactive transport of contaminant and oxidant was developed in the framework of the Meshfree method. The parameter analysis has determined the optimum shape parameter (0.97), and the results of the model were compared with a physical sandbox model which were in good agreement. The genetic algorithm approach was used to find the optimum design of the remediation using permanganate as an oxidant. To find the optimum design we considered two objectives and two constraints. The results revealed that the breakthrough of contaminant to the downstream area of interest and the concentration of the contaminant in this area is reduced significantly with optimisation. © 2019. doi: 10.1016/j.scitotenv.2019.01.409

2019 • 166 **Investigation on cutting edge preparation and FEM assisted optimization of the cutting edge micro shape for machining of nickel-base alloy**
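The simulation-optimisation loop in entry 167 (Seyedpour et al.) above couples a transport model to a genetic algorithm under constraints. A deliberately simplified sketch, with a hypothetical one-line transport surrogate and a penalty term enforcing a residual-concentration constraint (all functions, bounds, and numbers are illustrative, not from the paper):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def cost(x):
    # Hypothetical stand-in objective: minimise oxidant usage, with a large
    # penalty when the residual contaminant concentration exceeds the limit.
    oxidant, rate = x
    residual = 1.0 / (1.0 + oxidant * rate)    # toy transport surrogate
    penalty = 1e3 * max(0.0, residual - 0.05)  # constraint: residual <= 0.05
    return oxidant + penalty

def genetic_minimise(fitness, bounds, pop=40, gens=60, mut=0.1):
    """Bare-bones GA: truncation selection, averaging crossover, Gaussian mutation."""
    lo, hi = bounds
    population = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, mut) for x, y in zip(a, b)]
            children.append([min(hi, max(lo, v)) for v in child])
        population = parents + children
    return min(population, key=fitness)

best = genetic_minimise(cost, (0.0, 10.0))
```

The real study evaluates each candidate design with the coupled RPCM flow and transport model instead of a closed-form surrogate, which is why surrogate speed matters there.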

Tiffe, M. and Aßmuth, R. and Saelzer, J. and Biermann, D. *Production Engineering* 13, 459-467 (2019). The productivity and the tool life of cutting tools are majorly influenced by the cutting edge micro shape. The identification of optimized cutting edges is usually based on empirical knowledge or is carried out in iterative investigation steps. This paper presents an approach to predict optimal cutting edge micro shapes with the aid of finite-element-simulations of the chip formation. The approach is investigated for the machining of the nickel-base alloy Inconel 718. The cutting edges are prepared by pressurized air wet abrasive jet machining. Utilizing this method, the prepared cutting edges have a certain profile, which is considered for the modelling. By applying a model for tool wear the influence of the cutting edge micro shape on the tool life span is estimated. Subsequently, a statistical modelling provides the prediction of the tool wear rate for any possible parameter set within the investigated range. This is used to find an optimized cutting edge profile that minimizes the tool wear. An experimental investigation concludes the optimization procedure. © 2019, German Academic Society for Production Engineering (WGP). doi: 10.1007/s11740-019-00900-8

2019 • 165 **Comparison of residence time models for pharmaceutical twin-screw-extrusion processes**

Wesholowski, J. and Podhaisky, H. and Thommes, M. *Powder Technology* 341, 85-93 (2019). Twin-Screw-Extrusion is an emerging and focused method with several applications in the pharmaceutical field. With respect to the desired process conditions, three different types of extrusion can be utilized such as Hot-Melt-, Wet- or Cold-Extrusion. For all of them the residence time and the residence time distribution are crucial process parameters determining the duration of thermal and mechanical stress to the processed material. Several approaches describing the residence time of extrusion processes are known and the most commonly applied models (Axial-Dispersion-, Tanks-in-Series- and Two-Compartment-Model) and functions (Zusatz-Function) were investigated. Therefore, experimental data representing different process conditions was applied from literature. The residence time distribution models were implemented and the least squares method was used to obtain the characteristic model parameters. The numerically calculated results were compared and evaluated based on the deviations to the experimental data overall, in crucial sections of the residence time plots and the comprehensibility of the model variables with respect to the interpretation for a process optimization. Moreover, the correlations between the characteristic parameters to those parameters of the different other tested models as well as their physical meaning have been revealed. Based on the results, a new model explicitly for twin-screw-extrusion was developed. This Twin-Dispersion-Model respects two independent mixing processes. Regarding the parameters' comprehensibility and insight in process condition changes as well as deviations of the model fit to the experimental data, it was superior to all other tested models. © 2018 Elsevier B.V. doi: 10.1016/j.powtec.2018.02.054

2019 • 164 **Optical Measurement Method of Particle Suspension in Stirred Vessels**
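Entry 165 (Wesholowski et al.) above fits residence time distribution models to extrusion data by least squares. One of the compared models, Tanks-in-Series, has the standard normalised form E(θ) = N^N θ^(N-1) e^(-Nθ) / (N-1)! with θ = t/τ. A minimal sketch of a least-squares fit; the "measured" curve is synthetic and the integer-only scan over N is a simplification:

```python
import math

def tanks_in_series(theta, n):
    """Normalised residence time distribution E(theta) for n ideal stirred
    tanks in series (theta = time / mean residence time)."""
    return (n ** n) * theta ** (n - 1) * math.exp(-n * theta) / math.factorial(n - 1)

# Synthetic "measured" curve generated with n = 5, then recovered by a
# least-squares scan over candidate tank numbers.
thetas = [0.1 * i for i in range(1, 31)]
measured = [tanks_in_series(t, 5) for t in thetas]

def sse(n):
    return sum((tanks_in_series(t, n) - m) ** 2 for t, m in zip(thetas, measured))

n_best = min(range(1, 21), key=sse)  # recovers n = 5
```

The paper additionally fits continuous-parameter models (Axial-Dispersion, Two-Compartment, Zusatz-Function) to real extrusion data, where a gradient-based least-squares solver replaces this brute-force scan.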

Wolinski, S. and Ulbricht, M. and Schultz, H.J. *Chemie-Ingenieur-Technik* 91, 1326-1332 (2019). Suspending particles in liquids is an important and versatile case for industrial stirring processes. By using advanced optical, non-invasive measurement techniques like particle image velocimetry (PIV), it is possible to gain deep insights into the involved fluid dynamics without affecting the flow. However, for suspensions, the application of PIV is not trivial since both, suspended and tracer particles are present and need to be discerned during experiments. The here presented method development solves this problem and thus leads to a better insight into turbulent kinetic energy distribution, which can be utilized for process optimization through improved stirred vessel design. © 2019 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. doi: 10.1002/cite.201800099

2019 • 163 **Femtosecond x-ray diffraction reveals a liquid–liquid phase transition in phase-change materials**

Zalden, P. and Quirin, F. and Schumacher, M. and Siegel, J. and Wei, S. and Koc, A. and Nicoul, M. and Trigo, M. and Andreasson, P. and Enquist, H. and Shu, M.J. and Pardini, T. and Chollet, M. and Zhu, D. and Lemke, H. and Ronneb... *Science* 364, 1062-1067 (2019). In phase-change memory devices, a material is cycled between glassy and crystalline states. The highly temperature-dependent kinetics of its crystallization process enables application in memory technology, but the transition has not been resolved on an atomic scale. Using femtosecond x-ray diffraction and ab initio computer simulations, we determined the time-dependent pair-correlation function of phase-change materials throughout the melt-quenching and crystallization process. We found a liquid–liquid phase transition in the phase-change materials Ag4In3Sb67Te26 and Ge15Sb85 at 660 and 610 kelvin, respectively. The transition is predominantly caused by the onset of Peierls distortions, the amplitude of which correlates with an increase of the apparent activation energy of diffusivity. This reveals a relationship between atomic structure and kinetics, enabling a systematic optimization of the memory-switching kinetics. 2017 © The Authors, some rights reserved. doi: 10.1126/science.aaw1773

2018 • 162 **Comparison of thermodynamic topology optimization with SIMP**

Jantos, D.R. and Riedel, C. and Hackl, K. and Junker, P. *Continuum Mechanics and Thermodynamics* (2018). Computationally efficient approaches to topology optimization usually include heuristic update and/or filtering schemes to overcome numerical problems such as the well-known checkerboarding phenomenon, local minima, and the associated mesh dependency. In a series of papers, Hamilton’s principle, which originates from thermodynamic material modeling, was applied to derive a model for topology optimization based on a novel conceptual structure: utilization of this thermodynamic approach resulted in an evolution equation for the local mass distribution as the update scheme during the iterative optimization process. Although this resulted in topologies comparable to those from classical optimization schemes, no direct linkage between these different approaches has yet been drawn. In this contribution, we present a detailed comparison of the new approach to the well-established SIMP approach. To this end, minor modifications of the original thermodynamic approach yield an optimization process with a numerical efficiency that is comparable to that of SIMP approaches. However, a great advantage of the new approach arises from results that are parameter- and mesh-independent, although neither filtering techniques nor gradient constraints are applied. Several 2D and 3D examples are discussed and serve as a profound basis for an extensive comparison, which also helps to reveal similarities and differences between the individual approaches. © 2018, Springer-Verlag GmbH Germany, part of Springer Nature. doi: 10.1007/s00161-018-0706-y

2018 • 161 **Robust nonfragile observer-based H2/H∞ controller**
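The SIMP scheme that entry 162 (Jantos et al.) above compares against penalises intermediate material densities through a power-law interpolation of the Young's modulus, which is what drives the optimizer toward crisp void/solid designs. A minimal sketch; the modulus bounds and penalty exponent are typical textbook values, not taken from the paper:

```python
def simp_modulus(rho, e0=1.0, e_min=1e-9, p=3.0):
    """SIMP interpolation E(rho) = E_min + rho**p * (E0 - E_min).

    The small E_min keeps the stiffness matrix non-singular in void
    regions; the exponent p > 1 penalises intermediate densities.
    """
    return e_min + rho ** p * (e0 - e_min)

# With p = 3, half-density material supplies only ~12.5% of the full
# stiffness while still costing 50% of the mass, so the optimizer is
# pushed toward 0/1 designs.
stiffness_half = simp_modulus(0.5)
```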

Oveisi, A. and Nestorović, T. *JVC/Journal of Vibration and Control* 24, 722-738 (2018). A robust nonfragile observer-based controller for a linear time-invariant system with structured uncertainty is introduced. The H∞ robust stability of the closed-loop system is guaranteed by use of the Lyapunov theorem in the presence of undesirable disturbance. For the sake of addressing the fragility problem, independent sets of time-dependent gain-uncertainties are assumed to be existing for the controller and the observer elements. In order to satisfy the arbitrary H2-normed constraints for the control system and to enable automatic determination of the optimal H∞ bound of the performance functions in disturbance rejection control, additional necessary and sufficient conditions are presented in a linear matrix equality/inequality framework. The H∞ observer-based controller is then transformed into an optimization problem of coupled set of linear matrix equalities/inequality that can be solved iteratively by use of numerical software such as Scilab. Finally, concerning the evaluation of the performance of the controller, the control system is implemented in real time on a mechanical system, aiming at vibration suppression. The plant under study is a multi-input single-output clamped-free piezo-laminated smart beam. The nominal mathematical reduced-order model of the beam with piezo-actuators is used to design the proposed controller and then the control system is implemented experimentally on the full-order real-time system. The results show that the closed-loop system has a robust performance in rejecting the disturbance in the presence of the structured uncertainty and in the presence of the unmodeled dynamics. © 2016, The Author(s) 2016. doi: 10.1177/1077546316651548

2018 • 160 **Observer-based repetitive model predictive control in active vibration suppression**

Oveisi, A. and Hosseini-Pishrobat, M. and Nestorović, T. and Keighobadi, J. *Structural Control and Health Monitoring* 25 (2018). In this paper, an observer-based feedback/feedforward model predictive control (MPC) algorithm is developed for addressing the active vibration control (AVC) of lightly damped structures. For this purpose, the feedback control design process is formulated in the framework of disturbance rejection control (DRC) and a repetitive MPC is adapted to guarantee the robust asymptotic stability of the closed-loop system. To this end, a recursive least squares (RLS) algorithm is engaged for online estimation of the disturbance signal, and the estimated disturbance is feed-forwarded through the control channels. The mismatched disturbance is considered as a broadband energy bounded unknown signal independent of the control input, and the internal model principle is adjusted to account for its governing dynamics. For the sake of relieving the computational burden of online optimization in MPC scheme, within the broad prediction horizons, a set of orthonormal Laguerre functions is utilized. The controller design is carried out on a reduced-order model of the experimental system in the nominal frequency range of operation. Accordingly, the system model is constructed following the frequency-domain subspace system identification method. Comprehensive experimental analyses in both time-/frequency-domain are performed to investigate the robustness of the AVC system regarding the unmodeled dynamics, parametric uncertainties, and external noises. Additionally, the spillover effect of the actuation authorities and saturation of the active elements as two common issues of AVC systems are addressed. Copyright © 2018 John Wiley & Sons, Ltd. doi: 10.1002/stc.2149

2018 • 159 **Adaptive optimal control of Signorini’s problem**

Rademacher, A. and Rosin, K. *Computational Optimization and Applications* 70, 531-569 (2018). In this article, we present a-posteriori error estimations in context of optimal control of contact problems; in particular of Signorini’s problem. Due to the contact side-condition, the solution operator of the underlying variational inequality is not differentiable, yet we want to apply Newton’s method. Therefore, the non-smooth problem is regularized by penalization and afterwards discretized by finite elements. We derive optimality systems for the regularized formulation in the continuous as well as in the discrete case. This is done explicitly for Signorini’s contact problem, which covers linear elasticity and linearized surface contact conditions. The latter creates the need for treating trace-operations carefully, especially in contrast to obstacle contact conditions, which exert in the domain. Based on the dual weighted residual method and these optimality systems, we deduce error representations for the regularization, discretization and numerical errors. Those representations are further developed into error estimators. The resulting error estimator for regularization error is defined only in the contact area. Therefore its computational cost is especially low for Signorini’s contact problem. Finally, we utilize the estimators in an adaptive refinement strategy balancing regularization and discretization errors. Numerical results substantiate the theoretical findings. We present different examples concerning Signorini’s problem in two and three dimensions. © 2018, Springer Science+Business Media, LLC, part of Springer Nature. doi: 10.1007/s10589-018-9982-5

2018 • 158 **Artificial Noise-Based Physical-Layer Security in Interference Alignment Multipair Two-Way Relaying Networks**

Tubail, D. and El-Absi, M. and Ikki, S.S. and Mesbah, W. and Kaiser, T. *IEEE Access* 6 19073-19085 (2018) This paper introduces two novel physical-layer security algorithms for interference alignment (IA)-based multipair communication systems with a single half-duplex relay and a single eavesdropper. Under these proposed physical-layer security algorithms, users mix their information signals with jamming signals and broadcast them in the multiple-access phase, while the relay forwards the mixed signals in the broadcast phase. Moreover, the relay and user precoding and decoding matrices are designed in a way that enables the legitimate receivers to eliminate the jamming signals while the hidden eavesdropper is unable to eliminate these jamming streams. In this context, the proposed algorithms are designed to transmit the information streams with minimum power, preserving the users' received signal-to-noise ratio above a predetermined threshold and utilizing the remaining power for the jamming signals. Therefore, the allocation of the user and relay power budgets is formulated as a joint optimization problem that can be solved using an iterative optimization algorithm and semidefinite programming. On this basis, four transmission models are proposed to manage the artificial noise transmission among the different users and achieve a tradeoff between the users' sum rate and secrecy rate. Extensive simulation results are provided to show the efficiency of the proposed algorithms and transmission models in achieving transmission security for IA-based multiuser relaying networks. © 2013 IEEE. doi: 10.1109/ACCESS.2018.2817264

2017 • 157 **L1 penalization of volumetric dose objectives in optimal control of PDEs**

Barnard, R.C. and Clason, C. *Computational Optimization and Applications* 67 401-419 (2017) This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. The performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints. © 2017 Springer Science+Business Media New York (outside the USA). doi: 10.1007/s10589-017-9897-6

2017 • 156 **Efficient variational constitutive updates for Allen–Cahn-type phase field theory coupled to continuum mechanics**

Bartels, A. and Mosler, J. *Computer Methods in Applied Mechanics and Engineering* 317 55-83 (2017) This paper deals with efficient variational constitutive updates for Allen–Cahn-type phase field theory coupled to a geometrically exact description of continuum mechanics. The starting point of the implementation is a unified variational principle: a time-continuous potential is introduced, the minimizers of which naturally describe every aspect of the aforementioned coupled model, including the homogenization assumptions defining the mechanical response of the bulk material in the diffuse interface region. With regard to these assumptions, classic models such as those by Voigt/Taylor or by Reuss/Sachs are included. Additionally, more sound homogenization approaches falling into the range of rank-1 convexification are also included in the unified framework. Based on a direct discretization of this time-continuous potential in time and space, an efficient numerical finite element implementation is proposed. In order to guarantee admissible order parameters of the phase field, the unconstrained optimization problem is supplemented by respective constraints. They are implemented by means of Lagrange parameters combined with Fischer–Burmeister NCP functions. This results in an exact fulfillment of the aforementioned constraints without considering any inequality. Several numerical examples show the predictive capabilities as well as the robustness and efficiency of the final algorithmic formulation. Furthermore, the influence of the homogenization assumption is analyzed in detail. It is shown that the choice of the homogenization assumption does influence the predicted microstructure in general; however, all models converge to the same solution in the limiting case. © 2016 Elsevier B.V. doi: 10.1016/j.cma.2016.11.024

2017 • 155 **CFD simulation for internal coolant channel design of tapping tools to reduce tool wear**
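The Fischer–Burmeister NCP function used by Bartels and Mosler replaces the complementarity conditions a ≥ 0, b ≥ 0, ab = 0 by the single equation φ(a, b) = 0, so no inequality has to be handled explicitly; a minimal illustration:

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b
    # phi(a, b) == 0  <=>  a >= 0, b >= 0 and a * b == 0
    return math.sqrt(a * a + b * b) - a - b

# Complementary pairs are roots; a pair with both entries positive is not.
print(fischer_burmeister(0.0, 3.0))   # 0.0
print(fischer_burmeister(2.0, 0.0))   # 0.0
print(fischer_burmeister(1.0, 1.0))   # sqrt(2) - 2, nonzero
```

In a constrained update, the root of φ is then sought with a (semismooth) Newton scheme; the scalar evaluation above only illustrates the equivalence.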

Biermann, D. and Oezkaya, E. *CIRP Annals - Manufacturing Technology* 66 109-112 (2017) This paper presents the analysis and controlled modification of the coolant flow in tapping processes by means of Computational Fluid Dynamics (CFD). First, a conventional straight-flute tapping tool was analyzed, and the results of the CFD simulation show that the cutting edges are not sufficiently supplied with coolant. Therefore, the design of the internal coolant channels was modified based on these simulation results. To validate the CFD simulation, experimental tests were performed using an optimized tool. The applied modifications led to a reduction of tool wear, and an increase in tool performance of about 36% was achieved. © 2017. doi: 10.1016/j.cirp.2017.04.024

2017 • 154 **Optimization of the operation characteristic of a highly stressed centrifugal compressor impeller using automated optimization and metamodeling methods**

Geller, M. and Schemmann, C. and Kluck, N. *Proceedings of the ASME Turbo Expo* 2C-2017 (2017) The continuously rising global demand for energy together with simultaneously decreasing resources has made the topic of energy efficiency, and therefore optimization, one of the fundamental questions of our time. Turbomachinery is one of the most important parts of the process chain in nearly every case of energy conversion, which makes the turbomachine a promising starting point for optimization. The special relevance of this topic in regard to the global challenge of climate change can be illustrated by a simple calculation: if the efficiency of a turbo compressor with a power consumption of 15 MW is improved by one percent, approximately 2 t of CO2 per day, or over 760 t of CO2 per year, can be saved. This work describes the optimization of the operation characteristic of a highly stressed centrifugal compressor impeller with regard to the size of the operation range and the efficiency in the operation point. The base impeller used for this optimization had already been pre-optimized by classical engineering methods utilizing analytical and empirical models. Due to the high mechanical stress in this kind of turbo impeller, each design has to be checked for compliance with the structural constraints in addition to the fluid dynamic computations. This results in a highly complex, multi-criteria, high-dimensional optimization problem. The main subjects of the presented work are robust geometry and grid generation, a highly automated workflow for the computation of the operation characteristic and the mechanical results, and the representation of the operation characteristic by scalar parameters. Utilizing these tools, a design of experiments (DOE) is performed, and a metamodel is created based on its results. The optimization is carried out on the metamodel using a particle swarm algorithm. The workflow presented in this work utilizes in-house preprocessing tools as well as the tools of the ANSYS Workbench. The operation characteristics are computed using an in-house tool to control the ANSYS CFX solver. The statistical and stochastic pre- and post-processing as well as the metamodeling are carried out in optiSLang. Copyright © 2017 ASME. doi: 10.1115/GT2017-63262

2017 • 153 **The PRIMPING routine—Tiling through proximal alternating linearized minimization**
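The back-of-the-envelope CO2 saving quoted in the Geller et al. abstract can be reproduced. The grid emission factor below (0.55 t CO2 per MWh of electricity, a typical value for that period) is an assumption, not stated in the source:

```python
power_mw = 15.0                 # compressor power consumption
efficiency_gain = 0.01          # one percent improvement
emission_factor = 0.55          # assumed t CO2 per MWh of electricity

saved_mw = power_mw * efficiency_gain        # 0.15 MW saved continuously
saved_mwh_per_day = saved_mw * 24.0          # 3.6 MWh per day
co2_per_day = saved_mwh_per_day * emission_factor
co2_per_year = co2_per_day * 365.0

print(round(co2_per_day, 2))   # 1.98, roughly the 2 t/day of the abstract
print(round(co2_per_year))     # 723; the abstract's "over 760 t" implies a slightly higher factor
```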

Hess, S. and Morik, K. and Piatkowski, N. *Data Mining and Knowledge Discovery* 31 1090-1131 (2017) Mining and exploring databases should provide users with knowledge and new insights. Tilings of the data strive to unveil the true underlying structure and to distinguish valuable information from various kinds of noise. We propose a novel Boolean matrix factorization algorithm to solve the tiling problem, based on recent results from optimization theory. In contrast to existing work, the new algorithm minimizes the description length of the resulting factorization. This approach is well known for model selection and data compression, but not for finding suitable factorizations via numerical optimization. We demonstrate the superior robustness of the new approach in the presence of several kinds of noise and types of underlying structure. Moreover, our general framework can work with any cost measure having a suitable real-valued relaxation; no convexity assumptions have to be met. The experimental results on synthetic data and image data show that the new method identifies interpretable patterns which explain the data almost always better than the competing algorithms. © 2017, The Author(s). doi: 10.1007/s10618-017-0508-z

2017 • 152 **Optimized growth and reorientation of anisotropic material based on evolution equations**
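The Boolean matrix product underlying the tiling problem in the Hess et al. entry (not the PRIMPING algorithm itself) can be sketched; the factors and data below are a toy example:

```python
def boolean_product(U, V):
    """Boolean matrix product: (U o V)[i][j] = OR_k (U[i][k] AND V[k][j])."""
    n, r, m = len(U), len(U[0]), len(V[0])
    return [[int(any(U[i][k] and V[k][j] for k in range(r))) for j in range(m)]
            for i in range(n)]

def reconstruction_error(X, U, V):
    """Number of cells where the Boolean factorization disagrees with X."""
    R = boolean_product(U, V)
    return sum(X[i][j] != R[i][j] for i in range(len(X)) for j in range(len(X[0])))

# Two overlapping tiles (rank-2 binary factors); overlaps stay 1 (OR, not sum).
U = [[1, 0], [1, 1], [0, 1]]          # rows x tiles
V = [[1, 1, 0, 0], [0, 1, 1, 1]]      # tiles x columns
X = boolean_product(U, V)
```

A tiling algorithm searches for binary U and V of small description length with low reconstruction error; here X is exactly reproduced by construction.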

Jantos, D.R. and Junker, P. and Hackl, K. *Computational Mechanics* 1-20 (2017) Modern high-performance materials have inherently anisotropic elastic properties. The local material orientation can thus be considered an additional design variable for the topology optimization of structures containing such materials. In our previous work, we introduced a variational growth approach to topology optimization for isotropic, linear-elastic materials. We solved the optimization problem purely by application of Hamilton’s principle. In this way, we were able to determine an evolution equation for the spatial distribution of mass density, which can be evaluated in an iterative process within a solitary finite element environment. We now add the local material orientation, described by a set of three Euler angles, as an additional design variable in the three-dimensional model. This leads to three additional evolution equations that can be evaluated separately for each (material) point. Thus, no additional field unknown is needed within the finite element approach, and the evolution of the spatial distribution of mass density and the evolution of the Euler angles can be evaluated simultaneously. © 2017 Springer-Verlag GmbH Germany. doi: 10.1007/s00466-017-1483-3

2017 • 151 **Finite element analysis of combined forming processes by means of rate dependent ductile damage modelling**

Kiliclar, Y. and Vladimirov, I.N. and Wulfinghoff, S. and Reese, S. and Demir, O.K. and Weddeling, C. and Tekkaya, A.E. and Engelhardt, M. and Klose, C. and Maier, H.J. and Rozgic̀, M. and Stiemer, M. *International Journal of Material Forming* 10 73-84 (2017) Sheet metal forming is an inherent part of today’s production industry. A major goal is to increase the forming limits of classical deep-drawing processes. One possibility to achieve this is to combine the conventional quasi-static (QS) forming process with electromagnetic high-speed (HS) post-forming. This work focuses on the finite element analysis of such combined forming processes to demonstrate the improvement that can be achieved. For this purpose, a cooperation of different institutions representing different fields of work has been established. The material characterization is based on flow curves and forming limit curves for low and high strain rates obtained with novel testing devices. Further experimental investigations have been performed on the process chain of a cross-shaped cup, referring to both purely quasi-static forming and quasi-static forming combined with electromagnetic forming. While efficient mathematical optimization algorithms support the new viscoplastic ductile damage modelling in finding the optimum parameters based on the results of the experimental material characterization, the full process chain is studied by means of an electro-magneto-mechanical finite element analysis. The constitutive equations of the material model are integrated in an explicit manner and implemented as a user material subroutine in the commercial finite element package LS-DYNA. © 2015 Springer-Verlag France. doi: 10.1007/s12289-015-1278-z

2017 • 150 **Optimal control of the thermistor problem in three spatial dimensions, part 2: Optimality conditions**

Meinlschmidt, H. and Meyer, C. and Rehberg, J. *SIAM Journal on Control and Optimization* 55 2368-2392 (2017) This paper is concerned with the state-constrained optimal control of the three-dimensional thermistor problem, a fully quasilinear coupled system of a parabolic and an elliptic PDE with mixed boundary conditions. This system models the heating of a conducting material by means of direct current. Local existence, uniqueness, and continuity for the state system, as well as the existence of optimal solutions admitting global-in-time solutions to the optimization problem, were shown in the companion paper of this work. In this part, we address further properties of the set of controls whose associated solutions exist globally, such as openness, which includes an analysis of the linearized state system via maximal parabolic regularity. The adjoint system involving measures is investigated using a duality argument. These results allow us to derive first-order necessary conditions for the optimal control problem in the form of a qualified optimality system in which we do not need to refer to the set of controls admitting global solutions. The theoretical findings are illustrated by numerical results. This work is the second of two papers on the three-dimensional thermistor problem. © 2017 Society for Industrial and Applied Mathematics. doi: 10.1137/16M1072656

2017 • 149 **Analytisch formulierte Näherungslösungen zur Grundwasserströmung bei einer Restwasserhaltung**

Perau, E. and Meteling, N. *Geotechnik* 40 2-14 (2017) Analytical approximate solutions for groundwater flow at a residual water drainage system. If an excavation extends below the groundwater table, it makes sense to embed the pit wall in a less permeable soil stratum and to operate a residual water drainage system. With such a construction, water flows under the pit walls, and a flow field arises which has to be determined for various calculations and stability verifications. For instance, hydraulic gradients, discharge velocities as well as potential heads and pore-water pressures have to be calculated. These values are needed to determine the earth and water pressure distribution. They can also be used for verifications regarding hydraulic failure, internal erosion and failure of the earth support, as well as for the calculation of groundwater influx. Using the finite element method (FEM), a systematic parameter study is conducted as the basis for formulating analytical approximation solutions, including theoretical boundary cases. By defining the hydraulic problem as a parameterized boundary value problem, the parameter study can be optimized for both plane and axisymmetric states with isotropic and anisotropic subsoil. By evaluating the mathematical characteristics of the boundary value problem and conducting a dimensional analysis, it is possible to reduce the number of parameters considerably. Copyright © 2017 Ernst & Sohn Verlag für Architektur und technische Wissenschaften GmbH & Co. KG, Berlin. doi: 10.1002/gete.201500032

2017 • 148 **DISMS2: A flexible algorithm for direct proteome-wide distance calculation of LC-MS/MS runs**

Rieder, V. and Blank-Landeshammer, B. and Stuhr, M. and Schell, T. and Biß, K. and Kollipara, L. and Meyer, A. and Pfenninger, M. and Westphal, H. and Sickmann, A. and Rahnenführer, J. *BMC Bioinformatics* 18 (2017) Background: The classification of samples on a molecular level has manifold applications, from patient classification regarding cancer treatment to phylogenetics for identifying evolutionary relationships between species. Modern methods employ the alignment of DNA or amino acid sequences, mostly not genome-wide but only on selected parts of the genome. Recently, proteomics-based approaches have become popular. An established method for the identification of peptides and proteins is liquid chromatography-tandem mass spectrometry (LC-MS/MS). First, protein sequences from MS/MS spectra are identified by means of database searches, given samples with known genome-wide sequence information; then sequence-based methods are applied. Alternatively, de novo peptide sequencing algorithms annotate MS/MS spectra and deduce peptide/protein information without a database. A newer approach independent of additional information is to directly compare unidentified tandem mass spectra. The challenge then is to compute the distance between pairs of MS/MS runs consisting of thousands of spectra. Methods: We present DISMS2, a new algorithm to calculate proteome-wide distances directly from MS/MS data, extending the algorithm compareMS2, an approach that also uses a spectral comparison pipeline. Results: Our new, more flexible algorithm, DISMS2, allows for the choice of the spectrum distance measure and includes different spectrum preprocessing and filtering steps that can be tailored to specific situations by parameter optimization. Conclusions: DISMS2 performs well for samples from species with and without database annotation and thus has clear advantages over methods that are purely based on database search. © 2017 The Author(s). doi: 10.1186/s12859-017-1514-2

2017 • 147 **Micromechanical modeling approach to derive the yield surface for BCC and FCC steels using statistically informed microstructure models and nonlocal crystal plasticity**
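A common building block for comparing unidentified spectra, as in the DISMS2 entry above, is a distance between binned peak lists. The sketch below uses a cosine distance with an illustrative bin width and toy peaks; it is not the DISMS2 implementation:

```python
import math
from collections import defaultdict

def bin_spectrum(peaks, width=1.0):
    """Sum (mz, intensity) peaks into m/z bins of the given width."""
    bins = defaultdict(float)
    for mz, intensity in peaks:
        bins[int(mz / width)] += intensity
    return bins

def cosine_distance(s1, s2, width=1.0):
    """1 - cosine similarity of two binned spectra (0 = identical direction)."""
    a, b = bin_spectrum(s1, width), bin_spectrum(s2, width)
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb)

spec = [(114.1, 30.0), (227.2, 100.0), (341.3, 55.0)]
d_same = cosine_distance(spec, spec)             # identical spectra -> distance 0
d_diff = cosine_distance(spec, [(500.0, 10.0)])  # disjoint peaks -> distance 1
```

A run-level distance then aggregates such spectrum-level distances over all spectrum pairs of two LC-MS/MS runs.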

Vajragupta, N. and Ahmed, S. and Boeff, M. and Ma, A. and Hartmaier, A. *Physical Mesomechanics* 20 343-352 (2017) The yield surface is one of the most important criteria for describing irreversible deformation during metal forming processes. Because of their simplicity and efficiency, analytical yield functions, along with experimental guidelines for their parameterization, are becoming increasingly important for engineering applications. However, the relationship between most of these models and microstructural features is still limited. Hence, we propose to use micromechanical modeling, which considers important microstructural features, as part of the solution to this missing link. This study aims at the development of a micromechanical modeling strategy to calibrate the material parameters of the advanced analytical initial yield function Barlat YLD 2004-18p. To accomplish this, a representative volume element is first created by a method that uses the statistical description of the microstructure morphology as input. This method couples particle simulations with radical Voronoi tessellations to generate realistic virtual microstructures as representative volume elements. Afterwards, a nonlocal crystal plasticity model is applied to describe the plastic deformation of the representative volume element by crystal plasticity finite element simulation. Subsequently, an algorithm to construct the yield surface from the crystal plasticity finite element simulation is developed. The primary objective of this algorithm is to automatically capture and extract the yield loci under various loading conditions. Finally, a nonlinear least squares optimization is applied to determine the material parameters of the Barlat YLD 2004-18p initial yield function for the representative volume element, mimicking generic properties of BCC and FCC steels from the numerical simulations. © 2017, Pleiades Publishing, Ltd. doi: 10.1134/S1029959917030109

2017 • 146 **Measuring and Predicting Thermodynamic Limitation of an Alcohol Dehydrogenase Reaction**

Voges, M. and Fischer, F. and Neuhaus, M. and Sadowski, G. and Held, C. *Industrial and Engineering Chemistry Research* 56 5535-5546 (2017) Knowledge of the thermodynamic limitations of enzymatic reactions, and of the factors influencing them, is essential for process optimization to increase space-time yields and to reduce solvent and energy consumption. In this work, the alcohol dehydrogenase (ADH) catalyzed reaction from acetophenone and 2-propanol to 1-phenylethanol and acetone in aqueous solution was investigated in a temperature range of 293.15-303.15 K at pH 7. It serves as a model reaction to demonstrate the use of biothermodynamics to investigate and predict limitations of enzymatic reactions. Experimental molalities of the reacting agents at equilibrium were measured, yielding the position of the reaction equilibrium (Km) at different reaction conditions (temperature, initial reactant molalities). The maximum initial acetophenone molality under investigation was 0.02 mol·kg-1 due to solubility limitations, with a 1- to 50-fold excess of 2-propanol. It was shown that Km strongly depends on the initial reactant molalities as well as on the reaction temperature. Experimental Km values were in the range of 0.20 to 0.49. Thermodynamic key properties (thermodynamic equilibrium constant, standard Gibbs energy and standard enthalpy of reaction) were determined from the measured Km values and the activity coefficients of the reacting agents predicted with the thermodynamic model ePC-SAFT. In addition, ePC-SAFT was used to predict Km at different initial molalities. Experimental and predicted results were in quantitative agreement (the root-mean-square error of experimental versus predicted Km was 0.053), showing that ePC-SAFT is a promising tool to identify process conditions that might increase or decrease Km values and, thus, shift the position of reactions for industrial applications. © 2017 American Chemical Society. doi: 10.1021/acs.iecr.7b01228

2017 • 145 **Influence of Natural Solutes and Ionic Liquids on the Yield of Enzyme-Catalyzed Reactions: Measurements and Predictions**
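The thermodynamic key properties in the Voges et al. entry relate to the equilibrium constant through ΔG° = −RT ln K. A worked check with an illustrative K in the reported range (not a measured value from the paper):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def standard_gibbs_energy(K, T):
    """Standard Gibbs energy of reaction in J/mol from equilibrium constant K at T (Kelvin)."""
    return -R * T * math.log(K)

# Illustrative: K = 0.30 at 298.15 K; K < 1 gives a positive dG (reactant-favoured).
dG = standard_gibbs_energy(0.30, 298.15)
print(round(dG))  # 2984
```

Conversely, a measured Km together with predicted activity coefficients yields the activity-based K that enters this relation.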

Voges, M. and Fischer, C. and Wolff, D. and Held, C. *Organic Process Research and Development* 21 1059-1068 (2017) The maximum yield of enzyme-catalyzed reactions is often limited by the thermodynamic equilibrium. Knowledge of the factors influencing these limitations is essential for process optimization to increase yields and to reduce solvent and energy consumption. In this work, the effect of solvents/cosolvents [e.g., ionic liquids (ILs)] and natural solutes on the thermodynamic yield limitations of two enzyme-catalyzed model reactions was investigated, namely, an alcohol dehydrogenase (ADH) reaction (acetophenone + 2-propanol ⇌ 1-phenylethanol + acetone) and an alanine aminotransferase reaction (l-alanine + 2-oxoglutarate ⇌ pyruvate + l-glutamate). Experimental results showed that the equilibrium position and the equilibrium product yield of both reactions in aqueous single-phase systems strongly depend on the type and molality of the natural solute/IL present as an additive in the reaction mixture. In addition, the ADH reaction was investigated in pure IL and in an IL/buffer two-phase system. Compared to the aqueous reaction mixtures, the reactant solubility could be increased significantly, but at the cost of a lower product yield. Finally, thermodynamic modeling by means of ePC-SAFT was used to predict the equilibrium product yield of both reactions at different reaction conditions (natural solute/IL type and molality) in the aqueous mixtures as well as in the IL. Experimental and predicted results were in good agreement, showing that ePC-SAFT is a promising tool for predicting yield limitations in different reaction media. © 2017 American Chemical Society. doi: 10.1021/acs.oprd.7b00178

2016 • 144 **Finite element model updating using simulated annealing hybridized with unscented Kalman filter**

Astroza, R. and Nguyen, L.T. and Nestorović, T. *Computers and Structures* 177 176-191 (2016) This paper proposes a method for finite element (FE) model updating of civil structures. The method is a hybrid global optimization algorithm combining simulated annealing (SA) with the unscented Kalman filter (UKF). The objective function in the optimization problem can be defined in the modal, time, or frequency domain. The algorithm improves the accuracy, convergence rate, and computational cost of the SA algorithm by locally improving the accepted candidates through the UKF. The proposed methodology is validated using a mathematical function and numerically simulated response data from linear and nonlinear FE models of realistic three-dimensional structures. © 2016 Elsevier Ltd. doi: 10.1016/j.compstruc.2016.09.001

2016 • 143 **Determination of force parameters for milling simulations by combining optimization and simulation techniques**
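The outer loop of the Astroza et al. hybrid is plain simulated annealing; a minimal sketch on a multi-minima test function, with the UKF refinement step omitted and all tuning values illustrative:

```python
import math
import random

def simulated_annealing(f, x0, steps=30000, t0=3.0, seed=1):
    """Minimize f by Metropolis sampling with a linearly cooled temperature."""
    random.seed(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9      # linear cooling schedule
        cand = x + random.gauss(0.0, 1.0)      # random neighbor proposal
        fc = f(cand)
        # accept all downhill moves and, with decaying probability, uphill ones
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Multi-minima test function with global minimum f(0) = 0
f = lambda x: x * x + 3.0 * (1.0 - math.cos(4.0 * x))
xbest, fbest = simulated_annealing(f, x0=8.0)
```

In the hybrid method, each accepted candidate would additionally be refined by a UKF-based local minimization before the next annealing step.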

Freiburg, D. and Hense, R. and Kersting, P. and Biermann, D. *Journal of Manufacturing Science and Engineering, Transactions of the ASME* 138 (2016) Milling is a machining process in which material removal occurs due to the rotary motion of a cutting tool relative to a typically stationary workpiece. In modern machining centers, up to and exceeding six degrees of freedom for motion relative to the tool and workpiece are possible, which results in very complex chip and force formation. For the process layout, simulations can be used to calculate the occurring process forces, which are needed, e.g., for the prediction of surface errors of the workpiece, or for tool wear and process optimization examinations. One limiting factor for the quality of simulation results is the parametrization of the models. The most important parameters for milling simulations are the ones that calibrate the force model, as nearly every modeled process characteristic depends on the forces. This article presents the combination of a milling simulation with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm for the fast determination of force parameters that are valid for a wide range of process parameters. Experiments were conducted to measure the process forces during milling with different process parameters. The measured forces serve as the basis for tests regarding the quality of the determined force parameters. The effect of tool runout on the optimization result is also discussed, as this may have a significant influence on the forces when using tools with more than one tooth. The article concludes with notes on the practical application of the algorithm. © 2016 by ASME. doi: 10.1115/1.4031336

2016 • 142 **An efficient PE-ALD process for TiO2 thin films employing a new Ti-precursor**
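Calibrating a cutting-force model against measured forces is, at its core, a least-squares fit. The sketch below uses a Kienzle-type law F = b·kc11·h^(1−mc) on synthetic data and a closed-form log-linear fit; this is a stand-in for, not a reproduction of, the article's simulation-coupled BFGS setup, and all values are illustrative:

```python
import math

# Synthetic "measured" forces generated from known parameters:
# F = b * kc11 * h**(1 - mc) with b = 2 mm, kc11 = 1500 N/mm^2, mc = 0.25
b = 2.0
h_vals = [0.05, 0.1, 0.2, 0.3]                       # undeformed chip thickness
F_meas = [b * 1500.0 * h ** (1 - 0.25) for h in h_vals]

# ln F = ln(b * kc11) + (1 - mc) * ln h  ->  ordinary linear least squares
xs = [math.log(h) for h in h_vals]
ys = [math.log(F) for F in F_meas]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

mc = 1 - slope                     # recovered exponent
kc11 = math.exp(intercept) / b     # recovered specific cutting force
```

With real, noisy forces and a full simulation in the loop, the fit is no longer linearizable, which is where a quasi-Newton method such as BFGS comes in.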

Gebhard, M. and Mitschker, F. and Wiesing, M. and Giner, I. and Torun, B. and De Los Arcos, T. and Awakowicz, P. and Grundmeier, G. and Devi, A. *Journal of Materials Chemistry C* 4 1057-1065 (2016) An efficient plasma-enhanced atomic layer deposition (PE-ALD) process was developed for high-quality TiO2 thin films, using a new Ti precursor, namely tris(dimethylamido)(dimethylamino-2-propanolato)titanium(IV) (TDMADT). The five-coordinate titanium complex is volatile, thermally stable and reactive, making it a potential precursor for ALD and PE-ALD processes. Process optimization was performed with respect to plasma pulse length and reactive gas flow rate. Besides determining an ALD window, the application of the new compound was investigated using an in situ quartz crystal microbalance (QCM) to monitor surface saturation and growth per cycle (GPC). The new PE-ALD process is demonstrated to be an efficient procedure for depositing stoichiometric titanium dioxide thin films under optimized process conditions at deposition temperatures as low as 60 °C. Thin films deposited on Si(100) and polyethylene terephthalate (PET) exhibit a low RMS roughness of about 0.22 nm. In addition, proof-of-principle studies on TiO2 thin films deposited on PET show promising results in terms of barrier performance, with oxygen transmission rates (OTR) found to be as low as 0.12 cm3·cm-2·day-1 for 14 nm thin films. © The Royal Society of Chemistry 2016. doi: 10.1039/c5tc03385c

2016 • 141 **An evolutionary topology optimization approach with variationally controlled growth**

Jantos, D.R. and Junker, P. and Hackl, K. *Computer Methods in Applied Mechanics and Engineering* 310 780-801 (2016) Previous work of Junker and Hackl (2016) presented a variational growth approach to topology optimization in which the problem of checkerboarding was suppressed by means of a discontinuous regularization scheme. This approach did not require additional filter techniques, and optimization algorithms were no longer needed. However, growth approaches to topology optimization demand some limitation in order to avoid a global and simultaneous generation of mass. This limitation had been achieved by a rather simple approach with restricted possibilities for control. In this contribution, we eliminate this drawback by introducing a Lagrange multiplier to control the total mass within the model space at each iteration step. This enables us to achieve directly controlled growth behavior and even to find optimized structures for prescribed structure volumes. Furthermore, a modified growth approach, which we refer to as the Lagrange shift approach, results in a numerically stable model that is easy to handle. After the derivation of the approach, we present numerical solutions for different boundary problems that demonstrate the potential of our model. © 2016 Elsevier B.V. doi: 10.1016/j.cma.2016.07.022

2016 • 140 **Optimization of artificial ground freezing in tunneling in the presence of seepage flow**
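The idea behind the Lagrange shift in the Jantos et al. entry, shifting a density update so that the total mass stays at a prescribed value, can be sketched for a plain gradient-type update; the sensitivities, bounds, and bisection used below are illustrative, not the paper's variational model:

```python
def shifted_update(rho, sens, mass_target, step=0.2, lo=1e-3, hi=1.0):
    """Descent step on densities, shifted by a multiplier so total mass is preserved."""
    def total(shift):
        # total mass after a shifted, box-clipped update
        return sum(min(hi, max(lo, r - step * (s - shift))) for r, s in zip(rho, sens))

    # total(shift) is nondecreasing in shift, so bisection finds the multiplier
    a, b = -100.0, 100.0
    for _ in range(100):
        m = 0.5 * (a + b)
        if total(m) < mass_target:
            a = m
        else:
            b = m
    shift = 0.5 * (a + b)
    return [min(hi, max(lo, r - step * (s - shift))) for r, s in zip(rho, sens)]

rho = [0.5, 0.5, 0.5, 0.5]
sens = [1.0, 0.2, 0.8, 0.1]           # illustrative sensitivities
new = shifted_update(rho, sens, mass_target=2.0)
```

Material is redistributed from high-sensitivity to low-sensitivity regions while the constraint sum(rho) = mass_target holds at every step.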

Marwan, A. and Zhou, M.-M. and Zaki Abdelrehim, M. and Meschke, G. *Computers and Geotechnics* 75 112-125 (2016) Artificial ground freezing is an environmentally friendly technique to provide temporary excavation support and groundwater control during tunnel construction under difficult geological and hydrological ground conditions. Evidently, groundwater flow has a considerable influence on the freezing process: large seepage flow may lead to long freezing times or may even prevent the formation of a closed frozen soil body. For the safe and economic design of freezing operations, this paper presents a coupled thermo-hydraulic finite element model for freezing soils, integrated within an Ant Colony Optimization (ACO) algorithm that optimizes ground freezing in tunneling by finding the optimal positions of the freeze pipes under seepage flow. The simulation model considers solid particles, liquid water and crystal ice as separate phases, and the mixture temperature and liquid pressure as primary field variables. Through two fundamental physical laws and corresponding state equations, the model captures the most relevant couplings between the phase transition, with its associated latent heat effect, and the liquid transport within the pores. The numerical model is validated against laboratory results considering different scenarios for seepage flow. As demonstrated in numerical simulations of ground freezing in tunneling in the presence of seepage flow connected with the ACO algorithm, the optimized arrangement of the freeze pipes may lead to a substantial reduction of the freezing time and of energy costs. © 2016. doi: 10.1016/j.compgeo.2016.01.004

2016 • 139 **Unscented hybrid simulated annealing for fast inversion of tunnel seismic waves**
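The core of ant colony optimization as used by Marwan et al., probabilistic solution construction biased by evaporating and reinforced pheromone, can be shown on a toy placement problem; the freeze-pipe objective is replaced by an illustrative cost, and all parameters are made up:

```python
import random

random.seed(0)
slots, options = 3, 8                  # e.g., 3 pipes, 8 candidate positions each
target = [1, 4, 6]                     # illustrative optimum (unknown to the ants)
cost = lambda sol: sum(abs(sol[s] - target[s]) for s in range(slots))

pheromone = [[1.0] * options for _ in range(slots)]

def construct():
    """Pick one option per slot with probability proportional to pheromone."""
    sol = []
    for s in range(slots):
        r, acc = random.random() * sum(pheromone[s]), 0.0
        for o in range(options):
            acc += pheromone[s][o]
            if r <= acc:
                sol.append(o)
                break
    return sol

best, best_cost = None, float("inf")
for _ in range(200):
    for sol in (construct() for _ in range(10)):   # 10 ants per iteration
        c = cost(sol)
        if c < best_cost:
            best, best_cost = sol, c
    for s in range(slots):
        for o in range(options):                   # evaporate, keep an exploration floor
            pheromone[s][o] = max(pheromone[s][o] * 0.9, 0.05)
        pheromone[s][best[s]] += 1.0               # reinforce the best-so-far solution
```

In the paper's setting, evaluating one candidate means running the coupled thermo-hydraulic FE model, so keeping the number of constructed solutions small is the whole point.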

Nguyen, L.T. and Nestorović, T.*Computer Methods in Applied Mechanics and Engineering*301 281-299 (2016)A new hybridized global optimization method that combines simulated annealing global search with unscented Kalman filter minimization is proposed to solve waveform inversion for predicting ahead of the underground tunnel face. The authors demonstrate fast and reliable convergence of this new algorithm through the optimization of a multi-minima test function and the inversion of synthetic tunnel seismic waveforms to predict the geological structure ahead of the tunnel face. With regard to the engineering application, the successful identification of the true model by minimizing a multimodal misfit functional for wide feasible bounds of the model parameters confirms that waveform inversion by the improved global optimization method is promising for practical applications with real measurement data. © 2015 Elsevier B.V.view abstract doi: 10.1016/j.cma.2015.12.004 2016 • 138 **Power Management Optimization of a Fuel Cell/Battery/Supercapacitor Hybrid System for Transit Bus Applications**

Odeim, F. and Roes, J. and Heinzel, A.*IEEE Transactions on Vehicular Technology*65 5783-5788 (2016)In this paper, the optimization of a power management strategy of a fuel cell/battery/supercapacitor hybrid vehicular system is investigated, both offline and in real time. Two offline optimization algorithms, namely, dynamic programming and Pontryagin's minimum principle, are first compared. The offline optimum is used as a benchmark when designing a real-time strategy, which is an inevitable step since the offline optimum is not real-time capable and is oriented only toward minimizing hydrogen consumption, which may result in the unnecessary overloading of the battery. The design and optimization of the real-time strategy makes use of a multiobjective genetic algorithm while taking into account, apart from hydrogen consumption, other important factors, such as the slow dynamics of the fuel cell system and minimizing the battery power burden. As a result, the real-time strategy is found to consume slightly more hydrogen than the offline optimum; however, it dramatically improves system durability. © 2016 IEEE.view abstract doi: 10.1109/TVT.2015.2456232 2016 • 137 **Interpretable domain adaptation via optimization over the Stiefel manifold**

Pölitz, C. and Duivesteijn, W. and Morik, K.*Machine Learning*104 315-336 (2016)In domain adaptation, the goal is to find common ground between two, potentially differently distributed, data sets. By finding common concepts present in two sets of words pertaining to different domains, one could leverage the performance of a classifier for one domain for use on the other domain. We propose a solution to the domain adaptation task, by efficiently solving an optimization problem through Stochastic Gradient Descent. We provide update rules that allow us to run Stochastic Gradient Descent directly on a matrix manifold: the steps compel the solution to stay on the Stiefel manifold. This manifold encompasses projection matrices of word vectors onto low-dimensional latent feature representations, which allows us to interpret the results: the rotation magnitude of the word vector projection for a given word corresponds to the importance of that word towards making the adaptation. Beyond this interpretability benefit, experiments show that the Stiefel manifold method performs better than state-of-the-art methods. © 2016, The Author(s).view abstract doi: 10.1007/s10994-016-5577-5 2016 • 136 **Implementation of incremental variational formulations based on the numerical calculation of derivatives using hyper dual numbers**
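The key ingredient of such manifold methods is that each gradient step is followed by a retraction mapping the iterate back onto the Stiefel manifold (matrices with orthonormal columns). A minimal dependency-free sketch using modified Gram-Schmidt as a QR-type retraction; the paper's actual update rules are more elaborate, and this only illustrates the manifold constraint (it assumes the stepped columns stay linearly independent):

```python
def gram_schmidt(cols):
    """Orthonormalize a list of column vectors (modified Gram-Schmidt).
    Assumes the input columns are linearly independent."""
    out = []
    for v in cols:
        w = v[:]
        for u in out:
            dot = sum(a * b for a, b in zip(w, u))
            w = [a - dot * b for a, b in zip(w, u)]
        norm = sum(a * a for a in w) ** 0.5
        out.append([a / norm for a in w])
    return out

def stiefel_sgd_step(cols, grads, lr=0.1):
    """One Euclidean descent step followed by a retraction back onto
    the Stiefel manifold, so the projection matrix keeps orthonormal columns."""
    stepped = [[a - lr * g for a, g in zip(v, gv)] for v, gv in zip(cols, grads)]
    return gram_schmidt(stepped)

cols = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]     # orthonormal 3x2 frame
grads = [[0.2, -0.1, 0.4], [0.3, 0.0, -0.2]]  # arbitrary illustrative gradients
new_cols = stiefel_sgd_step(cols, grads)
```

After the step, `new_cols` again has unit-norm, mutually orthogonal columns, which is exactly the property the update rules in the paper preserve.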

Tanaka, M. and Balzani, D. and Schröder, J.*Computer Methods in Applied Mechanics and Engineering*301 216-241 (2016)In this paper, novel implementation schemes for the automatic calculation of internal variables, stresses and consistent tangent moduli for incremental variational formulations (IVFs) describing inelastic material behavior are proposed. IVFs recast inelasticity theory as an equivalent optimization problem where the incremental stress potential within a discrete time interval is minimized in order to obtain the values of internal variables. In the so-called Multilevel Newton-Raphson method for the inelasticity theory, this minimization problem is typically solved by using second derivatives with respect to the internal variables. In addition, to calculate the stresses and moduli, further second derivatives with respect to deformation tensors are required. Compared with classical formulations such as the return mapping method, the IVFs are relatively new and their implementation is much less documented. Furthermore, higher order derivatives are required in the algorithms, demanding increased implementation effort. Therefore, even though IVFs are mathematically and physically elegant, their application is not standard. Here, novel approaches for the implementation of IVFs using hyper-dual numbers (HDNs) of second and higher order are presented to arrive at a fully automatic and robust scheme with computer accuracy. The proposed formulations are quite general and can be applied to a broad range of different constitutive models, which means that once the proposed schemes are implemented as a framework, any other dissipative material model can be implemented in a straightforward way by solely modifying the constitutive functions. These include the Helmholtz free energy function, the dissipation potential function and additional side constraints such as the yield function in the case of plasticity.
Its uncomplicated implementation for associative finite strain elasto-plasticity and its performance are illustrated by some representative numerical examples. © 2015 Elsevier B.V.view abstract doi: 10.1016/j.cma.2015.12.010 2016 • 135 **Complexity analysis of simulations with analytic bond-order potentials**
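The essential idea behind hyper-dual numbers is easy to demonstrate: carrying two infinitesimal parts and their cross term through ordinary arithmetic yields first and second derivatives that are exact to machine precision, with none of the truncation error of finite differences. A minimal sketch supporting only addition and multiplication (a full implementation for IVFs would also need division, powers and transcendental functions):

```python
class HyperDual:
    """Minimal second-order hyper-dual number: value, two first-order
    parts, and one mixed second-order part."""
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__

    def __mul__(self, o):
        # Product rule propagated through all perturbation parts.
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f * o.f1 + self.f1 * o.f,
                         self.f * o.f2 + self.f2 * o.f,
                         self.f * o.f12 + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f12 * o.f)
    __rmul__ = __mul__

def derivatives(fn, x):
    """Return fn(x), fn'(x), fn''(x) from a single hyper-dual evaluation."""
    r = fn(HyperDual(x, 1.0, 1.0, 0.0))
    return r.f, r.f1, r.f12

val, d1, d2 = derivatives(lambda x: x * x * x + 2 * x, 2.0)  # x^3 + 2x at x = 2
```

Here the value, first and second derivatives (12, 14 and 12) come out exactly, which is the "computer accuracy" the abstract refers to.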

Teijeiro, C. and Hammerschmidt, T. and Seiser, B. and Drautz, R. and Sutmann, G.*Modelling and Simulation in Materials Science and Engineering*24 (2016)The modeling of materials at the atomistic level with interatomic potentials requires a reliable description of different bonding situations and relevant system properties. For this purpose, analytic bond-order potentials (BOPs) provide a systematic and robust approximation to density functional theory (DFT) and tight binding (TB) calculations at reasonable computational cost. This paper presents a formal analysis of the computational complexity of analytic BOP simulations, based on a detailed assessment of the most computationally intensive parts. Different implementation algorithms are presented along with optimizations for efficient numerical processing. The theoretical complexity study is complemented by systematic benchmarks of the scalability of the algorithms with increasing system size and accuracy level of the BOP approximation. Both approaches demonstrate that the computation of atomic forces in analytic BOPs can be performed with a similar scaling as the computation of atomic energies. © 2016 IOP Publishing Ltd.view abstract doi: 10.1088/0965-0393/24/2/025008 2016 • 134 **Making the hydrogen evolution reaction in polymer electrolyte membrane electrolysers even faster**

Tymoczko, J. and Calle-Vallejo, F. and Schuhmann, W. and Bandarenka, A.S.*Nature Communications*7 (2016)Although the hydrogen evolution reaction (HER) is one of the fastest electrocatalytic reactions, modern polymer electrolyte membrane (PEM) electrolysers require larger platinum loadings (∼0.5-1.0 mg cm⁻²) than those in PEM fuel cell anodes and cathodes altogether (∼0.5 mg cm⁻²). Thus, catalyst optimization would help in substantially reducing the costs for hydrogen production using this technology. Here we show that the activity of platinum(111) electrodes towards HER is significantly enhanced with just monolayer amounts of copper. Positioning copper atoms into the subsurface layer of platinum weakens the surface binding of adsorbed H-intermediates and provides a twofold activity increase, surpassing the highest specific HER activities reported for acidic media under similar conditions, to the best of our knowledge. These improvements are rationalized using a simple model based on structure-sensitive hydrogen adsorption at platinum and copper-modified platinum surfaces. This model also solves a long-lasting puzzle in electrocatalysis, namely why polycrystalline platinum electrodes are more active than platinum(111) for the HER.view abstract doi: 10.1038/ncomms10990 2016 • 133 **A Framework for Multi-level Modeling and Optimization of Modular Hierarchical Systems**

Wagner, T. and Biermann, D.*Procedia CIRP*41 159-164 (2016)Most products and manufacturing systems (MS) have an inherent hierarchical structure. They are composed of multiple subsystems, such as machines, process components, or resources. In order to optimize the control parameters of such systems, manufacturing planners often follow a global black-box approach. The optimization, thus, neglects the hierarchical structure encoded in the model. All subsystems and their components have to meet individual constraints and show specific uncertainty in their output. By extracting the information, which modules violate the constraints, the optimization algorithm could focus on the parameters of this specific module. Moreover, the planner can define objectives evaluating the robustness or sensitivity of a specific solution based on the knowledge of the hierarchical dependencies and about the uncertainty in the outputs. To accomplish this, the structure of the optimized system must be known to the respective methods applied. In this paper, the dependencies of the subsystems are defined by means of a tree structure. Based on this structure, different possibilities to define and solve the corresponding optimization problem are introduced. In addition, a concept for addressing the robustness of an MS with regard to the uncertainty of the components within the optimization model is proposed. As a practical example, a hot compaction process for manufacturing thermoplastic composites is formalized using the tree structure. Individual nonlinear empirical models simulate the input-output behavior of each subsystem. Based on this formalization, the results of single- and multi-objective optimization methods are compared and their strengths and weaknesses are discussed. © 2016 The Authors.view abstract doi: 10.1016/j.procir.2015.12.050 2015 • 132 **Microfluidic detachment assay to probe the adhesion strength of diatoms**

Alles, M. and Rosenhahn, A.*Biofouling*31 469-480 (2015)Fouling release (FR) coatings are increasingly applied as an environmentally benign alternative for controlling marine biofouling. As the technology relies on removing fouling by water currents created by the motion of ships, weakening of adhesion of adherent organisms is the key design goal for improved coatings. In this paper, a microfluidic shear force assay is used to quantify how easily diatoms can be removed from surfaces. The experimental setup and the optimization of the experimental parameters to study the adhesion of the diatom Navicula perminuta are described. As examples of how varying the physico-chemical surface properties affects the ability of diatoms to bind to surfaces, a range of hydrophilic and hydrophobic self-assembled monolayers was compared. While the number of cells that attached (adhered) was barely affected by the coatings, the critical shear stress required for their removal from the surface varied significantly. © 2015 Taylor & Francis.view abstract doi: 10.1080/08927014.2015.1061655 2015 • 131 **Energy storage technologies as options to a secure energy supply**

Ausfelder, F. and Beilmann, C. and Bertau, M. and Bräuninger, S. and Heinzel, A. and Hoer, R. and Koch, W. and Mahlendorf, F. and Metzelthin, A. and Peuckert, M. and Plass, L. and Räuchle, K. and Reuter, M. and Schaub, G. and Sc...*Chemie-Ingenieur-Technik*87 17-89 (2015)The current energy system is subject to a profound change: A system designed to cater to energy needs by supplying fossil fuels is now expected to shift to integrate ever larger amounts of renewable energies to achieve overall a more sustainable energy supply. The challenges arising from this paradigm change are currently most obvious in the area of electric power supply. However, it affects the entire energy system, albeit with different effects. Within the energy system, various independent grids fulfill the function to transport and distribute energy or energy carriers in order to address spatially different energy supply and demand situations. Temporal variations are currently addressed by just-in-time production of the required energy form. However, renewable energy sources generally supply their energy independently from any specific energy demand. Their contribution to the overall energy system is expected to increase significantly. Energy storage technologies also represent an option to compensate for a temporal difference in energy supply and demand. Energy storage systems have the ability for a controlled take-up of a certain amount of energy, storing this energy within a storage medium on a relevant timescale and a controlled redispatch of the energy after a certain time delay. Energy storage systems can also be constructed as process chains by combinations of unit operations, each covering different aspects of those functions. Large-scale mechanical storage options for electrical power are currently almost exclusively pumped hydro storage. These systems might be complemented in the future by compressed-air storage and maybe liquid-air facilities.
There are several electrochemical storage technologies currently under investigation for their suitability as large scale electrical energy storage in various stages of research, development, and demonstration. Thermal energy storage technologies are based on a large variety of storage principles: Sensible heat, latent heat (based on phase transitions), adsorption/desorption processes or on chemical reactions. The latter can be a route to permanent and loss-free storage of heat. Chemical energy storage systems are based on the energy contained within the chemical bonds of the respective storage molecules. These storage molecules can act as energy carriers. Equally well, these compounds can enter various industrial value chains in energy-intensive industrial sectors and are therefore in direct economic competition with established (fossil) supply routes for these compounds. Water electrolysis, producing hydrogen and oxygen, is and will be the key technology for the foreseeable future. Hydrogen can be transformed by various processes to other energy carriers of interest. These transformations make the stored energy accessible by different sectors of the energy system and/or as raw materials for energy-intensive industrial processes. Some functions of energy storage systems can be taken over by industrial processes. Within the overall energy system, chemical energy storage technologies open up opportunities to link, connect and interweave the various energy streams and sectors. While chemical energy storage offers a route for a stronger integration of renewable energy outside the power sector, it also creates new opportunities for increased flexibility, novel synergies and additional optimization. Several examples of specific energy utilization are discussed and evaluated with respect to energy storage applications. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. 
KGaA.view abstract doi: 10.1002/cite.201400183 2015 • 130 **Energy-efficient resource allocation based on interference alignment in MIMO-OFDM cognitive radio networks**

El-Absi, M. and Ali, A. and El-Hadidy, M. and Kaiser, T.*Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST*156 534-546 (2015)In this paper, we propose an energy-efficient interference alignment (IA) based resource management algorithm for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) cognitive radio (CR) systems. The proposed algorithm provides the secondary users (SUs) with the opportunity for underlay sharing of the primary system spectrum. The proposed algorithm ensures the quality-of-service (QoS) of the primary system by guaranteeing the minimum transmission rate. The problem is formulated as a mixed-integer non-convex optimization problem, in which the objective is to maximize the energy efficiency, and the constraints are the per-user power budget and QoS demand of the primary system. To tackle the mixed-integer and non-convex nature of the problem, we propose a sub-optimal energy-efficient algorithm through two successive steps. The first step schedules the subcarriers among the SUs based on IA while the second step iteratively allocates the power based on Dinkelbach’s scheme. Simulations reveal that the proposed algorithm achieves significant improvement in the energy efficiency compared to the traditional spectrum-efficient algorithm. © Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2015.view abstract doi: 10.1007/978-3-319-24540-9_44 2015 • 129 **Interference Alignment with Frequency-Clustering for Efficient Resource Allocation in Cognitive Radio Networks**
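Dinkelbach's scheme, used in the second step, turns the fractional energy-efficiency objective (rate divided by power) into a sequence of easier parametric problems. An illustrative toy version over a finite candidate set; the actual power-allocation subproblem in the paper is continuous and subject to the stated constraints:

```python
def dinkelbach(candidates, tol=1e-9, max_iter=50):
    """Maximize rate/power over a finite set of (rate, power) candidates.
    Each iteration solves the parametric problem max_x rate(x) - lam*power(x)
    and updates lam with the ratio at the maximizer; lam converges to the
    optimal efficiency, reached when the parametric optimum hits zero."""
    lam = 0.0
    x = candidates[0]
    for _ in range(max_iter):
        x = max(candidates, key=lambda c: c[0] - lam * c[1])
        if abs(x[0] - lam * x[1]) < tol:   # F(lam) = 0  ->  lam is optimal
            return lam, x
        lam = x[0] / x[1]
    return lam, x

# Candidates as (rate, power) pairs; the best ratio here is 7/2 = 3.5.
eff, choice = dinkelbach([(10.0, 5.0), (12.0, 4.0), (7.0, 2.0)])
```

Note that the highest-rate candidate (12, 4) is not the most energy-efficient one; the iteration settles on (7, 2) after two updates of lam.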

El-Absi, M. and Shaat, M. and Bader, F. and Kaiser, T.*IEEE Transactions on Wireless Communications*14 7070-7082 (2015)In this paper, we investigate the resource management problem in orthogonal frequency division multiplexing (OFDM) based multiple-input multiple-output (MIMO) cognitive radio (CR) systems. We propose performing resource allocation based on interference alignment (IA) in order to improve the spectral efficiency of CR systems without affecting the quality of service of the primary system. IA plays a role in the proposed algorithm to enable the secondary users (SUs) to cooperate and share the available spectrum, which leads to a considerable increase in the spectral efficiency of CR systems. However, IA based spectrum sharing is restricted to a certain number of SUs per subcarrier in order to satisfy the IA feasibility conditions. Accordingly, the resource allocation problem is formulated as a mixed-integer optimization problem, which is considered an NP-hard problem. To reduce the computational complexity of the problem, a two-phase efficient sub-optimal algorithm is proposed. In the first phase, frequency-clustering is performed in order to satisfy the IA feasibility conditions, where each subcarrier is assigned to a feasible number of SUs. Whenever possible, the frequency-clustering phase considers fairness among the SUs. In the second phase, the available power is allocated among the subcarriers and SUs without violating the constraints that limit the maximum interference induced to the primary system. Simulation results show that IA with frequency-clustering achieves a significant sum rate increase compared to CR systems with orthogonal multiple access transmission techniques. © 2015 IEEE.view abstract doi: 10.1109/TWC.2015.2464371 2015 • 128 **Simulation based iterative post-optimization of paths of robot guided thermal spraying**

Hegels, D. and Wiederkehr, T. and Müller, H.*Robotics and Computer-Integrated Manufacturing*35 1-15 (2015)Robot-based thermal spraying is a production process in which an industrial robot guides a spray gun along a path in order to spray molten material onto a workpiece surface to form a coating of desired thickness. This paper is concerned with optimizing a given path of this sort by post-processing. Reasons for doing so are to reduce the thickness error caused by a not sufficiently precise design of the given path, to adapt the path to a changed spray gun or spray technology, to adapt the path to slight incremental changes of the workpiece geometry, or to smooth the path in order to improve its execution by the robot. An approach to post-optimization using the nonlinear conjugate gradient method is presented which employs a high-quality GPGPU-based simulation of the spray process for the evaluation of the coating thickness error and additionally takes care of the kinematic path quality. The number of computationally time-consuming calls of the simulation is kept low by analytically calculating estimates of gradients from a simplified material deposition model. A rigorous experimental evaluation on case studies of the mentioned applications shows that the method efficiently delivers improved paths which reduce the coating error on real free form surfaces considerably, i.e. the squared coating error is below 3.5% of the original value in every case study. © 2015 Elsevier Ltd. All rights reserved.view abstract doi: 10.1016/j.rcim.2015.02.002 2015 • 127 **Model-based multi-objective optimization: Taxonomy, multi-point proposal, toolbox and benchmark**
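The nonlinear conjugate gradient method at the core of the optimizer can be illustrated on a small analytic problem. A minimal Fletcher-Reeves variant with Armijo backtracking; the paper's implementation, with its GPGPU simulation calls and analytic gradient estimates, is of course far more involved:

```python
def ncg_minimize(f, grad, x0, iters=200, tol=1e-12):
    """Fletcher-Reeves nonlinear conjugate gradient with Armijo backtracking."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gTd = sum(gi * di for gi, di in zip(g, d))
        if gTd >= 0.0:                       # safeguard: restart with steepest descent
            d = [-gi for gi in g]
            gTd = -sum(gi * gi for gi in g)
        # Armijo backtracking line search along d.
        t, fx = 1.0, f(x)
        while t > 1e-12 and f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gTd:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if sum(gi * gi for gi in g_new) < tol:
            break
        beta = sum(a * a for a in g_new) / sum(a * a for a in g)  # Fletcher-Reeves
        d = [-a + beta * b for a, b in zip(g_new, d)]
        g = g_new
    return x

# Anisotropic quadratic with minimum at (1, -2), playing the role of the
# simulated coating-thickness error in this toy setting.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]
xmin = ncg_minimize(f, grad, [0.0, 0.0])
```

The conjugate direction update (the `beta` term) is what distinguishes the method from plain steepest descent and gives it much better behavior on ill-conditioned objectives.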

Horn, D. and Wagner, T. and Biermann, D. and Weihs, C. and Bischl, B.*Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*9018 64-78 (2015)Within the last 10 years, many model-based multi-objective optimization (MBMO) algorithms have been proposed. In this paper, a taxonomy of these algorithms is derived. It is shown which contributions were made to which phase of the MBMO process. Special attention is given to the proposal of a set of points for parallel evaluation within a batch. Proposals for four different MBMO algorithms are presented and compared to their sequential variants within a comprehensive benchmark. In particular for the classic ParEGO algorithm, significant improvements are obtained. The implementations of all algorithm variants are organized according to the taxonomy and are shared in the open-source R package mlrMBO. © Springer International Publishing Switzerland 2015view abstract doi: 10.1007/978-3-319-15934-8_5 2015 • 126 **Analyzing the BBOB results by means of benchmarking concepts**

Mersmann, O. and Preuss, M. and Trautmann, H. and Bischl, B. and Weihs, C.*Evolutionary Computation*23 161-185 (2015)We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the “best” one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case. © 2015 by the Massachusetts Institute of Technology.view abstract doi: 10.1162/EVCO_a_00134 2015 • 125 **Adaptive optimal control of the obstacle problem**

Meyer, C. and Rademacher, A. and Wollner, W.*SIAM Journal on Scientific Computing*37 A918-A945 (2015)This article is concerned with the derivation of a posteriori error estimates for optimization problems subject to an obstacle problem. To circumvent the nondifferentiability inherent to this type of problem, we introduce a sequence of penalized but differentiable problems. We show differentiability of the central path and derive separate a posteriori dual weighted residual estimates for the errors due to penalization, discretization, and iterative solution of the discrete problems. The effectivity of the derived estimates and of the adaptive algorithm is demonstrated on two numerical examples. © 2015 Society for Industrial and Applied Mathematics.view abstract doi: 10.1137/140975863 2015 • 124 **Model update and real-time steering of tunnel boring machines using simulation-based meta models**

Ninić, J. and Meschke, G.*Tunnelling and Underground Space Technology*45 138-152 (2015)A method for the simulation supported steering of the mechanized tunneling process in real time during construction is proposed. To enable real-time predictions of tunneling induced surface settlements, meta models trained a priori from a comprehensive process-oriented computational simulation model for mechanized tunneling for a certain project section of interest are introduced. For the generation of the meta models, Artificial Neural Networks (ANN) are employed in conjunction with Particle Swarm Optimization (PSO) for the model update according to monitoring data obtained during construction and for the optimization of machine parameters to keep surface settlements below a given tolerance. To provide a rich data base for the training of the meta model, the finite element simulation model for tunneling is integrated within an automatic data generator for setting up, running and postprocessing the numerical simulations for a prescribed range of parameters. Using the PSO-ANN for the inverse analysis, i.e. identification of model parameters according to monitoring results obtained during tunnel advance, allows the update of the model to the actual geological conditions in real time. The same ANN in conjunction with the PSO is also used for the determination of optimal steering parameters based on target values for settlements in the forthcoming excavation steps. The paper shows the performance of the proposed simulation-based model update and computational steering procedure by means of a prototype application to a straight tunnel advance in a non-homogeneous soil with two soil layers separated by an inclined boundary. © 2014 Elsevier Ltd.view abstract doi: 10.1016/j.tust.2014.09.013 2015 • 123 **Solving phase equilibrium problems by means of avoidance-based multiobjectivization**
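Particle swarm optimization, which drives both the model update and the steering-parameter search, is itself simple to sketch. An illustrative global-best PSO minimizing a toy analytic objective; the paper applies PSO to an ANN surrogate of the tunneling model, not to an analytic function, and the parameter values below are conventional defaults, not taken from the paper:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Toy misfit with minimum at (1, -2), standing in for the discrepancy
# between predicted and monitored settlements.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, best_f = pso(f, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Because PSO needs only objective evaluations, pairing it with a cheap ANN surrogate (instead of the full finite element model) is what makes the real-time inverse analysis in the paper feasible.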

Preuss, M. and Wessing, S. and Rudolph, G. and Sadowski, G.*Springer Handbook of Computational Intelligence*1159-1171 (2015)Phase-equilibrium problems are good examples for real-world engineering optimization problems with a certain characteristic. Despite their low dimensionality, finding the desired optima is difficult as their basins of attraction are small and surrounded by the much larger basin of the global optimum, which unfortunately resembles a physically impossible and therefore unwanted solution. We tackle such problems by means of a multi-objectivization-assisted multimodal optimization algorithm which explicitly uses problem knowledge concerning where the sought solutions are not in order to find the desired ones. The method is successfully applied to three phase equilibrium problems and shall be suitable also for tackling difficult multimodal optimization problems from other domains. © Springer-Verlag Berlin Heidelberg 2015.view abstract doi: 10.1007/978-3-662-43505-2_58 2015 • 122 **Reverse engineering of fluid selection for thermodynamic cycles with cubic equations of state, using a compression heat pump as example**

Roskosch, D. and Atakan, B.*Energy*81 202-212 (2015)Fluid selection for thermodynamic cycles like refrigeration cycles, heat pumps or organic Rankine cycles remains a current topic. Generally, the search for a working fluid is based on experimental approaches or on a not very systematic trial and error approach, far from being elegant. An alternative method may be a theory-based reverse engineering approach, proposed and investigated here: The design process should start with an optimal process and with (abstract) properties of the fluid needed to fit into this optimal process, best described by some general equation of state and the corresponding fluid-describing parameters. These should be analyzed and optimized with respect to the defined model process, which also has to be optimized simultaneously. From this information real fluids can be selected or even synthesized which have fluid-defining properties in the optimum regime, like critical temperature or ideal gas heat capacities, allowing one to find new working fluids not considered so far. The number and kind of the fluid-defining parameters is mainly based on the choice of the used EOS (equation of state). The property model used in the present work is based on the cubic Peng-Robinson equation, chosen due to its moderate numerical expense, sufficient accuracy as well as a general availability of the fluid-defining parameters for many compounds. The considered model process works between the temperature levels of 273.15 and 333.15 K and can be used as a heat pump for supplying buildings with heat, typically. The objective functions are the COP (coefficient of performance) and the VHC (volumetric heating capacity) as a function of critical pressure, critical temperature, acentric factor and two coefficients for the temperature-dependent isobaric ideal gas heat capacity. Also, the steam quality at the compressor entrance has to be regarded as a problem variable.
The results give clear hints regarding optimal fluid parameters of the analyzed process and deepen the thermodynamic understanding of the process. Finally, for the COP optimization a strategy for screening large databases is explained. Several fluids from different substance groups like hydrogen iodide (COP = 3.68), formaldehyde (3.61) or cyclopropane (3.42) were found to have higher COPs than the often used R134a (3.12). These fluids will also have to fulfill further criteria, prior to their usage, but the method appears to be a good base for fluid selection. (C) 2014 Elsevier Ltd. All rights reserved.view abstract doi: 10.1016/j.energy.2014.12.025 2015 • 121 **Development of a simulation-software for a hydrogen production process on a solar tower**
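The pressure-explicit form of the Peng-Robinson equation, with its fluid-defining parameters Tc, pc and the acentric factor ω, can be written down directly. A sketch of just this piece; the CO2 constants below are standard literature values used only to exercise the function, and the paper's full property model additionally needs quantities such as fugacities and enthalpy departures:

```python
R = 8.314462618  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, pc, omega):
    """Pressure [Pa] from the Peng-Robinson EOS, given temperature T [K],
    molar volume v [m^3/mol] and the fluid-defining parameters
    Tc [K], pc [Pa] and acentric factor omega [-]."""
    a = 0.45724 * R ** 2 * Tc ** 2 / pc          # attraction parameter
    b = 0.07780 * R * Tc / pc                    # covolume
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - (T / Tc) ** 0.5)) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

# CO2: Tc ~ 304.13 K, pc ~ 7.3773 MPa, omega ~ 0.225 (literature values).
p = peng_robinson_pressure(T=300.0, v=1.0, Tc=304.13, pc=7.3773e6, omega=0.225)
```

At such a low density the result is, as it should be, within a fraction of a percent of the ideal-gas pressure R*T/v; the five inputs (Tc, pc, omega and the two ideal-gas heat capacity coefficients that enter elsewhere) are exactly the fluid-defining parameters optimized in the paper.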

Säck, J.-P. and Roeb, M. and Sattler, C. and Pitz-Paal, R. and Heinzel, A.*Solar Energy*112 205-217 (2015)A simulation and control model for a two-step thermo-chemical water splitting cycle using metal oxides for the generation of hydrogen with a solar tower system as heat source has been developed. The simulation and control model consists of three main parts, the simulation of the solar flux distribution on the receiver, of the temperatures in the driven reactor modules and the produced hydrogen in the metal oxide. The results of the three parts of the simulation model have been evaluated by comparing and validating them with experimental data from the Hydrosol 100 kWth pilot plant at the Plataforma Solar de Almería (PSA) in Spain. With the overall model of the hydrogen production plant that was created, an evaluation of the two-step thermochemical cycle process in combination with a solar tower system was performed. The model was used to perform parametric studies for the development of the plant and the operation strategies. For this purpose, a provision in the overall model was integrated. The simulation helps to reduce the frequency of using the flux measurement system and can be used for the heliostat field control, in particular for the temperature control in the solar chemical reactor modules. Because of these promising results, the overall system model is being extended to enable its use as a control model with a controller for the temperature control of the two core reactions in the process. The central control variables of the process control were the operating temperatures for the hydrogen production and the regeneration of the two modules. The process control with its PI controller turned out to be suitable to compensate diurnal changes of solar input power as well as certain statistical fluctuations due to cloud passage.
At the same time, the limits of the operability and controllability of the process became clear in terms of the minimum solar power needed and the maximum acceptable gradients. With this experience, an operating strategy and the basic parameters of the system in operation, especially the start-up and shutdown procedures, regular operation and the response to disturbances, were selected and optimized. With this operation/control strategy, such a complex system can be operated automatically on a commercial scale in the future. The obtained results can also be adapted for other solar chemical processes. © 2014 Elsevier Ltd.view abstract doi: 10.1016/j.solener.2014.11.026 2015 • 120 **Design of 3D statistically similar Representative Volume Elements based on Minkowski functionals**

Scheunemann, L. and Balzani, D. and Brands, D. and Schröder, J.*Mechanics of Materials*90 185-201 (2015)In this paper an extended optimization procedure is proposed for the construction of statistically similar RVEs (SSRVEs), which are defined as artificial microstructures showing a lower complexity than the associated real microstructures. This enables a computationally efficient discretization required for numerical calculations of microscopic boundary value problems and therefore leads to more efficient computational two-scale schemes. The optimization procedure is staggered and consists of an outer and an inner optimization problem. The outer problem treats different types of morphology parameterizations, different sets of statistical measures and different sets of weighting factors needed in the inner problem, in order to minimize the mechanical error between the response of the SSRVE and that of a target (real) microstructure. The inner problem minimizes differences of statistical measures describing the microstructure morphology for a fixed parameterization type, statistical measures and weighting factors. The main contribution here is the analysis of new microstructure descriptors based on tensor-valued Minkowski functionals, whose numerical calculation requires less time than e.g. lineal-path functions. Thereby, a more efficient inner optimization problem can be realized and thus an automated solution of the outer optimization problem becomes more practicable. Representative examples demonstrate the performance of the proposed method. It turns out that the evaluation of objective functions formulated in terms of the Minkowski functionals is almost 2000 times faster than functions taking into account lineal-path functions. © 2015 Elsevier Ltd. All rights reserved.view abstract doi: 10.1016/j.mechmat.2015.03.005 2015 • 119 **Construction of statistically similar RVEs**
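As a toy illustration of scalar Minkowski functionals like those underlying the descriptors in the Scheunemann et al. paper: in 2D these are area, perimeter and Euler characteristic, which the sketch below computes for a binary pixel image by treating each foreground pixel as a closed unit square (an assumption made purely for illustration; the paper works with tensor-valued functionals).

```python
def minkowski_2d(image):
    """Area, perimeter and Euler characteristic of a binary pixel image,
    treating each foreground pixel as a closed unit square."""
    pixels = {(i, j) for i, row in enumerate(image)
              for j, v in enumerate(row) if v}
    area = len(pixels)
    # perimeter: pixel edges not shared with another foreground pixel
    perimeter = sum((i + di, j + dj) not in pixels
                    for i, j in pixels
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    # Euler characteristic via V - E + F on the union of unit squares
    verts = {(i + a, j + b) for i, j in pixels for a in (0, 1) for b in (0, 1)}
    edges = {frozenset(((i + a, j + b), (i + a + da, j + b + db)))
             for i, j in pixels
             for (a, b, da, db) in ((0, 0, 1, 0), (0, 0, 0, 1),
                                    (1, 1, -1, 0), (1, 1, 0, -1))}
    euler = len(verts) - len(edges) + area
    return area, perimeter, euler
```

For a single pixel this gives (1, 4, 1); for a 3×3 ring with a hole it gives (8, 16, 0), the Euler characteristic of an annulus.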

Scheunemann, L. and Balzani, D. and Brands, D. and Schröder, J.*Lecture Notes in Applied and Computational Mechanics*78 219-256 (2015)In modern engineering, micro-heterogeneous materials are designed to satisfy the needs and challenges in a wide field of technical applications. The effective mechanical behavior of these materials is influenced by the inherent microstructure and therein the interaction and individual behavior of the underlying phases. Computational homogenization approaches, such as the FE2 method, have been found to be a suitable tool for the consideration of the influences of the microstructure. However, when real microstructures are considered, high computational costs arise from the complex morphology of the microstructure. Statistically similar RVEs (SSRVEs) can be used as an alternative, which are constructed to possess similar statistical properties as the real microstructure but are defined by a lower level of complexity. These SSRVEs are obtained from a minimization of differences of statistical measures and mechanical behavior compared with a real microstructure in a staggered optimization scheme, where the inner optimization ensures statistical similarity and the outer optimization problem controls the mechanical comparability of the SSRVE and the real microstructure. The performance of SSRVEs may vary with the utilized statistical measures and the parameterization of the microstructure of the SSRVE. With regard to an efficient construction of SSRVEs, it is necessary to consider statistical measures which can be computed in reasonable time and which provide sufficient information about the real microstructure. Minkowski functionals are analyzed as a possible basis for statistical descriptors of microstructures and compared with other well-known statistical measures to investigate their performance. In order to emphasize the general importance of considering microstructural features by more sophisticated measures than basic ones, i.e. volume fraction, an analysis of upper bounds on the error of statistical measures and mechanical response is presented. © Springer International Publishing Switzerland 2015.view abstract doi: 10.1007/978-3-319-18242-1_9 2015 • 118 **Predicting thermal loading in NC milling processes**

Schweinoch, M. and Joliet, R. and Kersting, P.*Production Engineering*9 179-186 (2015)In dry NC milling, a significant amount of heat is introduced into the workpiece due to friction and material deformation in the shear zone. Time-varying contact conditions, relative tool–workpiece movement and continuous geometric change of the workpiece due to material removal lead to a perpetually changing inhomogeneous temperature distribution within the workpiece. This in turn subjects the workpiece to ongoing complex thermomechanical deformations. Machining such a thermally loaded and deformed workpiece to exact specifications may result in unacceptable shape deviations and thermal errors, which become evident only after dissipation of the introduced heat. This paper presents a hybrid simulation system consisting of a geometric multiscale milling simulation and a finite element method kernel for solving problems of linear thermoelasticity. By combination and back-coupling, the described system is capable of accurately modeling heat input, thermal dispersion, transient thermomechanical deformation and resulting thermal errors as they occur in NC milling processes. A prerequisite to accurately predicting thermomechanical errors is the correct simulation of the temperature field within the workpiece during the milling process. Therefore, this paper focuses on the precise prediction of the transient temperature distribution inside the workpiece. © 2014, German Academic Society for Production Engineering (WGP).view abstract doi: 10.1007/s11740-014-0598-z 2015 • 117 **Investigation of Diaphragm Deflection of an Absolute MEMS Capacitive Polysilicon Pressure Sensor**
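The transient temperature field that the Schweinoch et al. system resolves with FEM can be illustrated, in heavily simplified form, by an explicit finite-difference step for the 1D heat equation ∂T/∂t = α ∂²T/∂x² with fixed end temperatures; this is a generic sketch, not the paper's thermoelastic kernel.

```python
# Generic 1D sketch (not the paper's FEM kernel): one explicit finite-
# difference step for dT/dt = alpha * d2T/dx2 with fixed end temperatures.
def step_heat_1d(temps, alpha, dx, dt):
    """Advance the temperature field by one explicit Euler step.

    Stable only for r = alpha*dt/dx**2 <= 0.5 (standard FTCS criterion).
    """
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    interior = [t + r * (left - 2.0 * t + right)
                for left, t, right in zip(temps, temps[1:], temps[2:])]
    return [temps[0]] + interior + [temps[-1]]
```

Starting from a temperature spike, repeated steps diffuse the heat towards the fixed boundaries, mimicking (very crudely) the dissipation described in the abstract.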

Walk, C. and Goehlich, A. and Giese, A. and Goertz, M. and Vogt, H. and Kraft, M.*Smart Sensors, Actuators, and Mems Vii; and Cyber Physical Systems*9517 95170T (2015)This paper deals with the characteristics of circular-shaped polysilicon pressure sensor diaphragms operating in the non-tactile mode. Using a phase-shifting interferometer, the main characteristics of the diaphragms were investigated under applied pressure with respect to sensitivity, initial deflection and cavity height. Diaphragms with a thickness of 1 μm and a diameter of 96 μm were investigated in an intended range of applied pressure of about 700–2000 hPa. Process parameters with major impact on performance and yield limitations were identified. These include the variance in diaphragm sensitivity and the impact of the variance of the sacrificial oxide layer, which defines the diaphragm cavity height, on the contact pressure point. The sensitivity of these diaphragms including its variance was found to be -19.8 +/- 1.3 nm per 100 hPa. The impact of variance in the cavity height on the contact pressure point was found to be about 3.7 +/- 0.5 hPa per nm. Combining both impacts, a variation of the contact pressure point of more than 450 hPa can occur for a nominal deflection of 300 nm. By optimizing the diaphragm deposition process, the variance in the sensitivity of the diaphragm was decreased by a factor of 2. A semi-empirical formula was established that describes the deflection, including the initial deflection due to intrinsic stress, and the process variations. Validation against the experimentally obtained deflection lines showed good agreement, with deviations of less than 2% over the radial range of maximum deflection.view abstract doi: 10.1117/12.2176188 2015 • 116 **Identification of fully coupled anisotropic plasticity and damage constitutive equations using a hybrid experimental-numerical methodology with various triaxialities**
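For orientation, the pressure sensitivity of a thin clamped circular diaphragm can be roughed out with the classical small-deflection plate formula w₀ = p·a⁴/(64·D), flexural rigidity D = E·t³/(12(1−ν²)); unlike the paper's semi-empirical formula, this ignores intrinsic stress and so only gives the right order of magnitude.

```python
# Classical clamped-plate estimate (ignores the intrinsic stress that the
# paper's semi-empirical formula accounts for): center deflection
# w0 = p * a**4 / (64 * D), with D = E * t**3 / (12 * (1 - nu**2)).
def center_deflection(p, radius, thickness, e_mod, poisson):
    d = e_mod * thickness ** 3 / (12.0 * (1.0 - poisson ** 2))
    return p * radius ** 4 / (64.0 * d)
```

With assumed polysilicon values (E ≈ 160 GPa, ν ≈ 0.22) and the paper's geometry (t = 1 μm, a = 48 μm), this yields roughly 60 nm per 100 hPa, the same order as the measured 19.8 nm per 100 hPa; the gap is plausibly the neglected intrinsic stress.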

Yue, Z.M. and Soyarslan, C. and Badreddine, H. and Saanouni, K. and Tekkaya, A.E.*International Journal of Damage Mechanics*24 683-710 (2015)A hybrid experimental-numerical methodology is presented for the parameter identification of a mixed nonlinear hardening anisotropic plasticity model fully coupled with isotropic ductile damage accounting for microcrack closure effects. In this study, three test materials are chosen: DP1000, CP1200, and AL7020. The experiments involve tensile tests with smooth and notched specimens and two types of shear tests. The tensile tests with smooth specimens are conducted in different directions with respect to the rolling direction. This helps to determine the plastic anisotropy parameters of the material while the ductile damage is still negligible. Also, in-plane torsion tests with a single loading cycle are used to determine separately the isotropic and kinematic hardening parameters. Finally, tensile tests with notched specimens and Shouler and Allwood shear tests are used for the identification of the damage parameters. These are conducted until final fracture, with the stress triaxiality ratio η lying between 0 and 1/3 (i.e. 0 ≤ η ≤ 1/3). The classical force-displacement curves are chosen as the experimental responses. However, for the tensile test with notched specimens, the distribution of displacement components is measured using a full-field measurement technique (ARAMIS system). These experimental results are directly used by the identification methodology in order to determine the values of the material parameters involved in the constitutive equations. The inverse identification methodology combines an optimization algorithm coded within MATLAB with the finite element (FE) code ABAQUS/Explicit. After optimization, good agreement between experimental and numerically predicted results in terms of force-displacement curves is obtained for the three studied materials. 
Finally, the applicability and validity of the determined material parameters are proved with additional validation tests. © 2014 The Author(s) Reprints and permissions.view abstract doi: 10.1177/1056789514546578 2015 • 115 **Detection of elevated regions in surface images from laser beam melting processes**
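The triaxiality range quoted in the Yue et al. abstract (from 0 for shear to 1/3 for uniaxial tension) follows directly from the standard definition η = σ_m/σ_vM; a minimal check in terms of principal stresses:

```python
def triaxiality(s1, s2, s3):
    """Stress triaxiality eta = sigma_mean / sigma_vonMises
    computed from the principal stresses."""
    mean = (s1 + s2 + s3) / 3.0
    vm = (0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)) ** 0.5
    return mean / vm
```

For uniaxial tension, `triaxiality(100, 0, 0)` gives 1/3; for pure shear, `triaxiality(50, -50, 0)` gives 0, matching the endpoints of the tested range.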

Zur Jacobsmuhlen, J. and Kleszczynski, S. and Witt, G. and Merhof, D.*IECON 2015 - 41st Annual Conference of the IEEE Industrial Electronics Society*1270-1275 (2015)Laser Beam Melting (LBM) is a promising Additive Manufacturing technology that allows the layer-based production of complex metallic components suitable for industrial applications. Widespread application of LBM is hindered by a lack of quality management and process control. Elevated regions in produced layers pose a major risk to process stability, as collisions between the powder coating mechanism and the part may occur, causing damage to either one or both. We train a classifier-based detector for elevated regions in laser exposure result images. For this purpose we acquire two high-resolution layer images: one after laser exposure and another after powder deposition for the next layer. Ground truth labels for critical regions are obtained from analysis of the latter, where elevated regions are not covered by powder. We compute dense descriptors (HOG, DAISY, LBP) on the surface image after laser exposure and compare their predictive power. The top five descriptor configurations are used to optimize parameters of Random Forest, Support Vector Machine and Stochastic Gradient Descent (SGD) classifiers. We validate the detectors with optimized parameters using cross-validation on 281 images from three build jobs. Using a DAISY descriptor with an SGD classifier we achieve an F1-score of 0.670. The presented method enables detection of elevated regions before powder coating is performed and can be extended to other surface inspection tasks in LBM layer images. Detection results can be used to assess LBM process parameters with respect to process stability during process design and for quality management in production. © 2015 IEEE.view abstract doi: 10.1109/IECON.2015.7392275 2014 • 114 **Construction of two- and three-dimensional statistically similar RVEs for coupled micro-macro simulations**

Balzani, D. and Scheunemann, L. and Brands, D. and Schröder, J.*Computational Mechanics*54 1269-1284 (2014)In this paper a method is presented for the construction of two- and three-dimensional statistically similar representative volume elements (SSRVEs) that may be used in computational two-scale calculations. These SSRVEs are obtained by minimizing a least-square functional defined in terms of deviations of statistical measures describing the microstructure morphology and mechanical macroscopic quantities computed for a random target microstructure and for the SSRVE. It is shown that such SSRVEs serve as lower bounds in a statistical sense with respect to the difference of microstructure morphology. Moreover, an upper bound is defined by the maximum of the least-square functional. A staggered optimization procedure is proposed enabling a more efficient construction of SSRVEs. In an inner optimization problem we ensure that the statistical similarity of the microstructure morphology in the SSRVE compared with a target microstructure is as high as possible. Then, in an outer optimization problem we analyze mechanical stress–strain curves. As an example for the proposed method two- and three-dimensional SSRVEs are constructed for real microstructure data of a dual-phase steel. By comparing their mechanical response with the one of the real microstructure the performance of the method is documented. It turns out that the quality of the SSRVEs improves and converges to some limit value as the microstructure complexity of the SSRVE increases. This converging behavior gives reason to expect an optimal SSRVE at the limit for a chosen type of microstructure parameterization and set of statistical measures. © 2014, Springer-Verlag Berlin Heidelberg.view abstract doi: 10.1007/s00466-014-1057-6 2014 • 113 **'Nearly' universally optimal designs for models with correlated observations**
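The staggered scheme of the Balzani et al. paper can be caricatured as follows: an inner problem fits parameters to statistical targets for fixed weights, and an outer problem selects the weights whose fit gives the smallest mechanical error. Everything below (measures, grid, error function) is a made-up toy, not the paper's least-square functional.

```python
# Made-up toy of the staggered scheme: the inner problem fits SSRVE
# parameters to target statistical measures for fixed weights; the outer
# problem keeps the weight set whose fitted parameters give the smallest
# "mechanical" error. Measures, grid and error function are all invented.
def inner_fit(weights, target, measures, grid):
    """Minimize sum_i w_i * (measure_i(p) - target_i)**2 over a finite grid."""
    def objective(p):
        return sum(w * (m(p) - t) ** 2
                   for w, m, t in zip(weights, measures, target))
    return min(grid, key=objective)

def staggered(weight_sets, target, measures, grid, mech_error):
    """Outer loop: pick the weights whose inner optimum minimizes mech_error."""
    fits = {w: inner_fit(w, target, measures, grid) for w in weight_sets}
    best = min(fits, key=lambda w: mech_error(fits[w]))
    return best, fits[best]
```

The real method replaces the toy measures with spectral density, lineal-path functions or Minkowski functionals, and the toy error with FE-computed stress–strain mismatches.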

Dette, H. and Pepelyshev, A. and Zhigljavsky, A.*Computational Statistics and Data Analysis*71 1103-1112 (2014)The problem of determining optimal designs for least squares estimation is considered in the common linear regression model with correlated observations. The approach is based on the determination of 'nearly' universally optimal designs, even in the case where the universally optimal design does not exist. For this purpose, a new optimality criterion which reflects the distance between a given design and an ideal universally optimal design is introduced. A necessary condition for the optimality of a given design is established. Numerical methods for constructing these designs are proposed and applied for the determination of optimal designs in a number of specific instances. The results indicate that the new 'nearly' universally optimal designs have good efficiencies with respect to common optimality criteria. © 2013 Elsevier Inc. All rights reserved.view abstract doi: 10.1016/j.csda.2013.02.002 2014 • 112 **Interference alignment with frequency-clustering for efficient resource allocation in cognitive radio networks**

El-Absi, M. and Shaat, M. and Bader, F. and Kaiser, T.*2014 IEEE Global Communications Conference*979-985 (2014)In this paper, the problem of resource allocation in an overloaded orthogonal frequency division multiplexing (OFDM) based multiple-input multiple-output (MIMO) cognitive radio (CR) system is considered. The objective is to allocate the different subcarriers and distribute the available user power in order to maximize the CR system throughput. The interference induced to the primary system should not be harmful and hence should not exceed the prescribed limit. An interference alignment (IA) technique is employed in order to achieve an efficient use of the available radio resources. Without affecting the quality of service of the primary system, IA enables the secondary users to share the available spectrum, which increases the CR system degrees of freedom. Due to the IA feasibility condition, spectrum sharing with perfect IA is restricted to a certain number of users per subcarrier. Accordingly, the resource management problem is formulated as a mixed-integer optimization problem, which is NP-hard. To reduce the computational complexity of the problem, a two-phase efficient sub-optimal algorithm is proposed. Frequency-clustering is performed in the first phase to overcome the IA feasibility conditions, while the power is distributed among the subcarriers in the second phase. Simulations show that the IA technique achieves a significant sum-rate increase of CR systems compared with traditional CR systems that use orthogonal multiple access transmission techniques. © 2014 IEEE.view abstract doi: 10.1109/GLOCOM.2014.7036936 2014 • 111 **Simulation based process optimization for the milling of light weight components**
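The second phase of the El-Absi et al. algorithm distributes power among subcarriers. As a generic illustration of such a power-distribution step (explicitly not the authors' interference-constrained algorithm), here is classical water-filling for maximizing Σ log2(1 + g_i·p_i) subject to Σ p_i = P:

```python
# Classical water-filling (illustrative only; not the paper's two-phase,
# interference-constrained algorithm): maximize sum(log2(1 + g_i * p_i))
# subject to sum(p_i) = total_power, p_i >= 0.
def water_filling(gains, total_power):
    inv = sorted(1.0 / g for g in gains)
    for k in range(len(inv), 0, -1):            # try the k best channels
        mu = (total_power + sum(inv[:k])) / k   # candidate water level
        if mu > inv[k - 1]:                     # all k channels get power > 0
            return [max(0.0, mu - 1.0 / g) for g in gains]
    return [0.0 for _ in gains]
```

Channels with good gains get power up to the common "water level" μ, while poor channels (1/g ≥ μ) are switched off entirely.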

Freiburg, D. and Odendahl, S. and Siebrecht, T. and Steiner, M. and Wagner, T. and Zabel, A.*Procedia CIRP*18 132-137 (2014)This paper is focused on the virtual five-axis milling of light weight components with respect to joining elements and reinforced as well as functionally enhanced structural parts. The underlying concept here consists of a geometrical model representing the time and space discretized milling process and its ever changing engagement situations. This geometric information is then used to calculate physical process properties like forces and temperatures, which are necessary for optimization tasks and for the reliable prediction of process results. Additionally a decision-tree based process planning system is presented, supporting the user in maintaining knowledge from real and virtual milling processes. © 2014 Elsevier B.V.view abstract doi: 10.1016/j.procir.2014.06.120 2014 • 110 **Carbon-based yolk-shell materials for fuel cell applications**

Galeano, C. and Baldizzone, C. and Bongard, H. and Spliethoff, B. and Weidenthaler, C. and Meier, J.C. and Mayrhofer, K.J.J. and Schüth, F.*Advanced Functional Materials*24 220-232 (2014)The synthesis of yolk-shell catalysts, consisting of platinum or gold-platinum cores and graphitic carbon shells, and their electrocatalytic stabilities are described. Different encapsulation pathways for the metal nanoparticles are explored and optimized. Electrochemical studies of the optimized AuPt@C catalyst revealed a high stability of the encapsulated metal particles. However, in order to reach full activity, several thousand potential cycles are required. After the electrochemical surface area is fully developed, the catalysts show exceptionally high stability, with almost no degradation over approximately 30 000 potential cycles between 0.4 and 1.4 VRHE. Encapsulation of noble metals in graphitic hollow shells by hard templating is explored as a means for stabilizing fuel cell catalysts. Small platinum particles can be encapsulated, but the achievable loading is too small. Encapsulation of Au-Pt yolk-shell particles allows higher loading, and with such cores, stable catalysts could be produced. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.view abstract doi: 10.1002/adfm.201302239 2014 • 109 **The minimization of matrix logarithms: On a fundamental property of the unitary polar factor**

Lankeit, J. and Neff, P. and Nakatsukasa, Y.*Linear Algebra and Its Applications*449 28-42 (2014)We show that the unitary factor Up in the polar decomposition of a nonsingular matrix Z = Up H is a minimizer of both ‖Log(Q*Z)‖ and ‖sym*(Log(Q*Z))‖ over the unitary matrices Q ∈ U(n), for any given invertible matrix Z ∈ ℂⁿˣⁿ, for any unitarily invariant norm and any n. We prove that Up is the unique matrix with this property to minimize all these norms simultaneously. As important tools we use a generalized Bernstein trace inequality and the theory of majorization. © 2014 Published by Elsevier Inc.view abstract doi: 10.1016/j.laa.2014.02.012 2014 • 108 **Optimization of primary printed batteries based on Zn/MnO2**

Madej, E. and Espig, M. and Baumann, R.R. and Schuhmann, W. and La Mantia, F.*Journal of Power Sources*261 356-362 (2014)Thin-film batteries based on zinc/manganese dioxide chemistry with gel ZnCl2 electrolyte were manufactured as single (1.5 V) and double (3.0 V) cells from electrodes printed on paper substrates covered with different polymeric insulating coatings. Their properties were evaluated by means of electrochemical impedance spectroscopy and chronopotentiometry. Best performing cells achieved capacities in the range of 3 mAh cm-2 during discharge with 100 μA current, corresponding approximately to C/100 discharge rate. The influence of the cell elements on the overvoltage was examined and suggestions for the optimization of their performance were postulated. In particular, it was observed that limitations in the delivered power were governed by the poor conductivity of the carbon current collector. An optimized cell was built and showed a 4-fold improvement in the power delivered at 1 mA. © 2014 Elsevier B.V. All rights reserved.view abstract doi: 10.1016/j.jpowsour.2014.03.103 2014 • 107 **MISO beamforming for RFID systems via Second-Order Cone Programming**

Nagy, B. and Fawky, A. and Khaliel, M. and El-Hadidy, M. and Kaiser, T.*8th European Conference on Antennas and Propagation, EuCAP 2014*2808-2811 (2014)This paper presents a beamforming design for Multiple-Input Single-Output (MISO) Radio Frequency Identification (RFID) systems. The design ensures good Quality of Service (QoS) as well as optimal total power transmitted by the reader. Furthermore, a solution using Second-Order Cone Programming (SOCP) for the RFID beamforming problem is proposed. This SOCP algorithm yields optimal beamforming weights for the antenna array of the RFID-reader transmitter. Such weight optimization can serve as a guideline for RFID engineers to implement a high-QoS RFID system that can compete with the well-established traditional barcode technology. Simulation results are provided to show the effectiveness of the proposed algorithm. © 2014 European Association on Antennas and Propagation.view abstract doi: 10.1109/EuCAP.2014.6902410 2014 • 106 **On Grioli's minimum property and its relation to Cauchy's polar decomposition**

Neff, P. and Lankeit, J. and Madeo, A.*International Journal of Engineering Science*80 209-217 (2014)In this paper we rediscover Grioli's important work on the optimality of the orthogonal factor in the polar decomposition in a Euclidean distance framework. We also draw attention to recently obtained generalizations of this optimality property in a geodesic distance framework. © 2014 Elsevier Ltd. All rights reserved.view abstract doi: 10.1016/j.ijengsci.2014.02.026 2014 • 105 **A Logarithmic Minimization Property of the Unitary Polar Factor in the Spectral and Frobenius Norms**
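Grioli's optimality property can be checked numerically in the simplest setting: for a real 2×2 matrix Z with det Z > 0, the rotation closest to Z in the Frobenius norm (the orthogonal polar factor) maximizes trace(RᵀZ) = (a+d)cos t + (c−b)sin t, giving a closed-form angle. This toy check is ours, not from the Neff, Lankeit and Madeo paper.

```python
import math

def closest_rotation(z):
    """Rotation R minimizing ||Z - R||_F for a real 2x2 Z with det Z > 0:
    maximizing trace(R^T Z) = (a+d)*cos(t) + (c-b)*sin(t) gives t directly."""
    (a, b), (c, d) = z
    t = math.atan2(c - b, a + d)
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def frob_dist(z, q):
    """Frobenius distance between two 2x2 matrices."""
    return math.sqrt(sum((z[i][j] - q[i][j]) ** 2
                         for i in range(2) for j in range(2)))
```

Sampling rotations R(t) over a degree grid confirms that none comes closer to Z than the polar factor.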

Neff, P. and Nakatsukasa, Y. and Fischle, A.*Siam Journal on Matrix Analysis and Applications*35 1132-1154 (2014)The unitary polar factor Q = Up in the polar decomposition of Z = Up H is the minimizer over unitary matrices Q of both ‖Log(Q*Z)‖² and its Hermitian part ‖sym*(Log(Q*Z))‖², over both ℝ and ℂ, for any given invertible matrix Z ∈ ℂⁿˣⁿ and any matrix logarithm Log, not necessarily the principal logarithm log. We prove this for the spectral matrix norm for any n and for the Frobenius matrix norm for n ≤ 3. The result shows that the unitary polar factor is the nearest orthogonal matrix to Z not only in the normwise sense but also in a geodesic distance. The derivation is based on Bhatia's generalization of Bernstein's trace inequality for the matrix exponential and a new sum of squared logarithms inequality. Our result generalizes the fact for scalars that, for any complex logarithm and for all z ∈ ℂ\{0}, min_{ν∈(−π,π]} |Log(e^{−iν}z)|² = |log|z||² and min_{ν∈(−π,π]} |Re Log(e^{−iν}z)|² = |log|z||².view abstract doi: 10.1137/130909949 2014 • 104 **Power management optimization of fuel cell/battery hybrid vehicles with experimental validation**
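The scalar special case quoted at the end of the Neff, Nakatsukasa and Fischle abstract can be verified numerically with the principal logarithm: sampling ν over (−π, π], the minimum of |Log(e^{−iν}z)|² equals |log|z||², attained where the imaginary part of the logarithm vanishes (ν = arg z).

```python
import cmath, math

def scalar_log_min(z, steps=720):
    """Sample v over (-pi, pi] and return min |Log(e^{-iv} z)|**2 using the
    principal logarithm; per the abstract, the minimum is |log|z||**2."""
    grid = [(k / steps - 0.5) * 2 * math.pi for k in range(1, steps + 1)]
    return min(abs(cmath.log(cmath.exp(-1j * v) * z)) ** 2 for v in grid)
```

For z = 2 the grid contains ν = 0 = arg z, so the sampled minimum matches (log 2)² exactly; for other z the grid value is an upper bound within sampling resolution.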

Odeim, F. and Roes, J. and Wülbeck, L. and Heinzel, A.*Journal of Power Sources*252 333-343 (2014)Fuel cell hybrid vehicles offer a high-efficiency and low-emission substitute for their internal combustion engine counterparts. The hybridization significantly improves the fuel economy of the vehicle; however, exploiting the hybridization requires a well-designed power management strategy that optimally shares the power demand between the power sources. This paper deals with the optimization of power management strategy of a fuel cell/battery hybrid vehicle, both off-line and in real-time. A new formulation of the optimization problem for the real-time strategy is presented. The new approach allows the optimization of the controller over a set of driving cycles at once, which improves the robustness of the designed strategy. The real-time optimization is applied to two forms of real-time controllers: a PI controller based on Pontryagin's Minimum Principle with three parameters and a fuzzy controller with ten parameters. The results show that the PI controller can outperform the fuzzy controller, even though it has fewer parameters. The real-time controllers are designed by simulation and then validated by experiment. © 2013 Published by Elsevier Inc.view abstract doi: 10.1016/j.jpowsour.2013.12.012 2014 • 103 **Five-axis grinding of wear-resistant, thermally sprayed coatings on free-formed surfaces**

Rausch, S. and Biermann, D. and Kersting, P.*Production Engineering*8 423-429 (2014)The abrasive wear resistance of tribologically stressed free-formed surfaces can be increased with thermally sprayed tungsten carbide coatings. In order to improve the surface topographies and shape accuracies, the workpieces must be finished prior to industrial application. A suitable machining process is NC grinding on five-axis machining centres using abrasive mounted points. However, the high hardness of the applied coatings and the small diameter of the utilized tools pose a great challenge for the process design. In this paper, both the results of fundamental investigations on the grinding of tungsten carbide coatings and a process optimization for the finishing of a coated forming tool are presented. This includes the heat transfer into the coating and the tool wear during the grinding process, as well as the wear behaviour of the coating as a function of the generated surface topography. In order to achieve a smooth surface, elastic-bonded diamond tools were used for polishing in a multi-stage machining process. © 2014 German Academic Society for Production Engineering (WGP).view abstract doi: 10.1007/s11740-014-0537-z 2014 • 102 **Construction of statistically similar representative volume elements - Comparative study regarding different statistical descriptors**

Scheunemann, L. and Schröder, J. and Balzani, D. and Brands, D.*Procedia Engineering*81 1360-1365 (2014)Advanced high strength steels, such as dual-phase steel (DP steel), provide advantages for engineering applications compared to conventional high strength steel. The main constituents of DP steel on the microscopic level are martensitic inclusions embedded in a ferritic matrix. A way to include these heterogeneities on the microscale in the modeling of the material is the FE2 method. Herein, to every integration point of a macroscopic finite element problem a microscopic boundary value problem is attached, which consists of a representative volume element (RVE) often defined as a segment of a real microstructure. From this representation, high computational costs arise due to the complexity of the discretization, which can be circumvented by the use of a statistically similar RVE (SSRVE), which is governed by similar statistical features as the real target microstructure but shows a lower complexity. For the construction of such SSRVEs, an optimization problem is set up which consists of a least-square functional taking into account the differences of statistical measures evaluated for the real microstructure and the SSRVE. This functional is minimized to identify the SSRVE for which the similarity in a statistical sense is optimal. The choice of the statistical measures considered in the least-square functional, however, plays an important role. We focus on the construction of SSRVEs based on the volume fraction, lineal-path function and spectral density and check their performance in virtual tests, where the response of the individual SSRVEs is compared with the real target microstructure. Furthermore, higher-order measures, specifically some Minkowski functionals, are investigated regarding their applicability and efficiency in the optimization process. © 2014 The Authors. 
Published by Elsevier Ltd.view abstract doi: 10.1016/j.proeng.2014.10.157 2014 • 101 **Bionic optimization of concrete structures by evolutionary algorithms**

Schnellenbach-Held, M. and Habersaat, J.-E.*Structural Engineering International: Journal of the International Association for Bridge and Structural Engineering (IABSE)*24 229-235 (2014)Floor slabs represent a large volume of concrete in buildings. The goal of this research is to achieve a structure that has an optimized bearing capacity. The optimization implies economic efficiency and sustainability. This paper describes a bionic optimization process that is applied in a project of the German Research Foundation (DFG) Priority Programme called "Concrete light. Future concrete structures using bionic, mathematical and engineering formfinding principles". The project involves adaption of three different natural structures that lead to a natural flow of forces. These natural structures are (a) spider webs, (b) hollow parts of bones and (c) geometries of structures such as the bottom side of water lilies or seashells. This scientific paper deals with the implementation of an optimization process for a configuration of reinforcement inspired by a spider web. Evolutionary Algorithms (EAs) are used for the development and optimization of an innovative and useful configuration of reinforcement. The EAs use reproduction, mutation and selection as mechanisms, inspired by biological evolution, to solve technical problems gradient-free. In this project the EA is combined with physical nonlinear finite element analyses. The EA is embedded into a C# application, in which the slab structure is generated and the finite element programme is started. The quality of the results is characterized by the fitness of each individual (reinforcement configuration), which is, for this example, the midspan displacement of the generated slab multiplied by the steel volume per slab. Accordingly, the midspan displacement is to be minimized during the process, with the minimum possible amount of reinforcement. The optimization variables are the angles and the number of rebars per slab. 
Several constraints need to be included to obtain comparable results between the developed slabs and conventional slabs with orthogonally configured reinforcement. This paper presents the results of an optimized reinforcement configuration found by the EA and comparisons with the behaviour of conventional slabs with a similar reinforcement ratio.view abstract doi: 10.2749/101686614X13830790993564 2014 • 100 **Experimental and computational studies on the femoral fracture risk for advanced core decompression**
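The evolutionary loop described by Schnellenbach-Held and Habersaat (selection, mutation, reproduction) can be written as a stripped-down sketch; the quadratic fitness below is a stand-in, whereas the project evaluates each individual (reinforcement configuration) with nonlinear finite element analyses.

```python
import random

# Hypothetical miniature EA (keep the best half + Gaussian mutation); the
# project instead scores individuals with nonlinear FE analyses of the slab.
def evolve(fitness, dim, pop_size=20, generations=200, sigma=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # selection: rank by fitness
        parents = pop[: pop_size // 2]             # keep the best half
        children = [[g + rng.gauss(0.0, sigma)     # mutate a random parent
                     for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)
```

Because the best individuals are always retained, the search is gradient-free yet monotonically non-worsening, which is the property the paper exploits for the non-differentiable slab objective.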

Tran, T.N. and Warwas, S. and Haversath, M. and Classen, T. and Hohn, H.P. and Jäger, M. and Kowalczyk, W. and Landgraeber, S.*Clinical Biomechanics*29 412-417 (2014)Background Two questions are often addressed by orthopedists relating to the core decompression procedure: 1) Is the core decompression procedure associated with a considerable lack of structural support of the bone? and 2) Is there an optimal region for the surgical entrance point for which the fracture risk would be lowest? As bioresorbable bone substitutes become more and more common and core decompression has been described in combination with them, the current study takes this into account. Methods A finite element model of a femur treated by core decompression with bone substitute was simulated and analyzed. In-vitro compression testing of femora was used to confirm the finite element results. Findings The results showed that for core decompression with standard drilling in combination with artificial bone substitute refilling, daily activities (normal walking and walking downstairs) do not pose a femoral fracture risk. The femoral fracture risk increased successively as the entrance point was located further distally. The critical value of the deviation of the entrance point to a more distal part is about 20 mm. Interpretation The study findings demonstrate that the optimal entrance point should be located in the proximal subtrochanteric region in order to reduce the subtrochanteric fracture risk. Furthermore, the consistent results of finite element and in-vitro testing imply that the simulations are sufficiently accurate. © 2014 Elsevier Ltd.view abstract doi: 10.1016/j.clinbiomech.2014.02.001 2014 • 99 **Statistical comparison of classifiers for multi-objective feature selection in instrument recognition**

Vatolkin, I. and Bischl, B. and Rudolph, G. and Weihs, C.*Studies in Classification, Data Analysis, and Knowledge Organization*47 171-178 (2014)Many published articles in automatic music classification deal with the development and experimental comparison of algorithms; however, the final statements are often based on figures and simple statistics in tables, and only a few related studies apply proper statistical testing for a reliable discussion of results and measurements of the propositions' significance. Therefore, we provide two simple examples for a reasonable application of statistical tests for our previous study recognizing instruments in polyphonic audio. This task is solved by multi-objective feature selection starting from a large number of up-to-date audio descriptors and optimization of classification error and number of selected features at the same time by an evolutionary algorithm. The performance of several classifiers and their impact on the Pareto front are analyzed by means of statistical tests. © Springer International Publishing Switzerland 2014.view abstract doi: 10.1007/978-3-319-01595-8_19 2013 • 98 **Comparison of classical and sequential design of experiments in note onset detection**

Bauer, N. and Schiffner, J. and Weihs, C.*Studies in Classification, Data Analysis, and Knowledge Organization*501-509 (2013)Design of experiments is an established approach to parameter optimization of industrial processes. In many computer applications, however, it is usual to optimize the parameters via genetic algorithms. The main idea of this work is to apply design-of-experiments techniques to the optimization of computer processes. The major problem here is finding a compromise between model validity and costs, which increase with the number of experiments. The second relevant problem is choosing an appropriate model, which describes the relationship between parameters and target values. One of the recent approaches here is model combination. In this paper a musical note onset detection algorithm will be optimized using design of experiments. The optimal algorithm parameter setting is sought in order to get the best onset detection accuracy. We try different design strategies including classical and sequential designs and compare several model combination strategies. © Springer International Publishing Switzerland 2013.view abstract doi: 10.1007/978-3-319-00035-0-51 2013 • 97 **Planning and optimisation of manufacturing process chains for functionally graded components-part 1: Methodological foundations**
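The study by Bauer et al. above compares classical (one-shot) and sequential experimental designs for tuning a note-onset detector. As a minimal sketch of the classical approach, the following Python snippet evaluates a full-factorial design over two parameters; the parameter names and the toy accuracy function are illustrative assumptions, not taken from the paper.

```python
from itertools import product

def onset_accuracy(threshold, window):
    # Hypothetical stand-in for an expensive onset-detection run;
    # in this toy landscape accuracy peaks at threshold=0.3, window=1024.
    return 1.0 - abs(threshold - 0.3) - abs(window - 1024) / 4096

# Classical (one-shot) full-factorial design over two factors.
thresholds = [0.1, 0.3, 0.5]
windows = [512, 1024, 2048]
design = list(product(thresholds, windows))  # 3 x 3 = 9 runs

results = {(t, w): onset_accuracy(t, w) for t, w in design}
best = max(results, key=results.get)
print(best)  # → (0.3, 1024), the best factor combination in the design
```

A sequential design would instead use the outcomes of earlier runs to choose the next design points, trading design simplicity for fewer expensive evaluations.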

Biermann, D. and Gausemeier, J. and Hess, S. and Petersen, M. and Wagner, T.*Production Engineering*7 657-664 (2013)Functional gradation denotes a continuous distribution of properties over at least one spatial dimension of a component made of a single material. This distribution is tailored with respect to the later intended application of the component (Biermann et al. in Proceedings of the 1st international conference on thermo-mechanically graded materials, collaborative research centre transregio 30, Verlag Wissenschaftliche Scripten, Auerbach, pp 195-200, 2012). The improved utilisation of the material enables light weight design and a reduced resource consumption, thus offering an alternative for modern composite materials. However, their production requires complex thermo-mechanically coupled manufacturing process chains that increase the effort for the holistic design. To realise the full potential of functional gradation, novel ways for the planning and analysis of the corresponding manufacturing process chains have to be developed. This contribution proposes methods for the description of functionally graded components, as well as the synthetisation and optimisation of their corresponding process chains. The process knowledge, models and methods required are consolidated in a comprehensive planning framework. © 2013 German Academic Society for Production Engineering (WGP).view abstract doi: 10.1007/s11740-013-0490-2 2013 • 96 **A novel adaptive focusing principle for scanning light stimulation systems down to 2μm resolution**

Bitzer, L.A. and Benson, N. and Schmechel, R.*Conference Record of the IEEE Photovoltaic Specialists Conference*642-646 (2013)A new principle to achieve optimal focusing conditions or rather the smallest possible beam diameter for scanning light stimulation systems is presented. It is based on the following three steps: First, a reference point is introduced on a CMOS sensor to adjust the beam diameter. The distance between the light focusing optic and the reference point is then determined using a laser displacement sensor. In a second step, this displacement sensor is used to obtain the topography of the sample under investigation. Finally, the actual measurement is conducted, using optimal focusing conditions in each measurement point on the sample surface. They are determined by the height difference between the CMOS sensor and the sample topography. This principle is independent of optical or electrical sample properties, the used light source or the selected wavelength. Furthermore, the samples can be tilted, rough, bent or of different surface materials. The described focusing principle can be applied to any scanning light stimulation system. Here, it is implemented using an optical beam induced current (OBIC) setup with a laser light source. © 2013 IEEE.view abstract doi: 10.1109/PVSC.2013.6744233 2013 • 95 **A new adaptive light beam focusing principle for scanning light stimulation systems**

Bitzer, L.A. and Meseth, M. and Benson, N. and Schmechel, R.*Review of Scientific Instruments*84 (2013)In this article a novel principle to achieve optimal focusing conditions or rather the smallest possible beam diameter for scanning light stimulation systems is presented. It is based on the following methodology: First, a reference point on a camera sensor is introduced where optimal focusing conditions are adjusted and the distance between the light focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted, using optimal focusing conditions in each measurement point at the sample surface, that are determined by the height difference between camera sensor and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the used light source, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. In the following the principle is implemented using an optical beam induced current system, but basically it can be applied to any other scanning light stimulation system. Measurements to demonstrate its operation are shown, using a polycrystalline silicon solar cell. © 2013 American Institute of Physics.view abstract doi: 10.1063/1.4791795 2013 • 94 **Development of efficient role-based sensor network applications with excel spreadsheets**

Boelmann, C. and Weis, T.*Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS*365-371 (2013)Natural scientists use large scale sensor networks for gathering and analyzing environmental data. However, the implementation work requires expert programmers. The problem is complicated by limited battery lifetime, processing power and memory capacity of the nodes, because this requires a low-level programming language. Since scientists are used to analyzing data with spreadsheets, researchers have studied the possibility of applying spreadsheet-based programming to sensor networks. The approaches so far either require a central server to execute the spreadsheet, or they execute a spreadsheet run-time on each node. The first approach causes higher communication cost since all data has to be routed to the central server and the second one causes computational overhead, because evaluating a spreadsheet is slower than executing handcrafted NesC-code. Hence, we present a spreadsheet driven tool-chain that can create efficient NesC-code and allows for simulation in the spreadsheet itself. The nodes have to recompute the spreadsheet formulas upon new data. However, we can avoid a large fraction of this recomputation by applying several optimization strategies during code generation. In our example scenario, sensor nodes compute the variance across a series of sensor readings. We can show that the optimizations save 65% CPU cycles and the code size decreases by 12% when compared to non-optimized execution of the spreadsheet. Thus, our approach can deliver an easy way of developing sensor network programs while yielding very efficient code. © 2013 IEEE.view abstract doi: 10.1109/ICPADS.2013.58 2013 • 93 **Delayed Decision feedback equalization with adaptive noise filtering for IEEE 802.11p**
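The spreadsheet tool-chain of Boelmann and Weis above generates node code that avoids recomputing spreadsheet formulas from scratch when new sensor data arrives; their example scenario computes the variance of a series of sensor readings. A hedged illustration of that kind of incremental evaluation is Welford's online variance algorithm, sketched below in Python (a generic technique, not the authors' generated NesC code).

```python
class RunningVariance:
    """Welford's online algorithm: an O(1) update per new reading,
    avoiding a full recomputation over the whole series."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # Population variance of the readings seen so far.
        return self.m2 / self.n if self.n else 0.0

rv = RunningVariance()
for reading in [10.0, 12.0, 11.0, 13.0]:
    rv.update(reading)
print(rv.variance())  # → 1.25
```

Each `update` touches only three scalars, which is the kind of saving a code generator can exploit instead of re-evaluating a spreadsheet formula over the full data range on every new reading.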

Budde, R. and Kays, R.*IEEE Vehicular Networking Conference, VNC*17-23 (2013)In this paper, a receiver making use of a complexity-optimized decision feedback equalization (DFE) structure and adaptive noise filtering is presented. Vehicular transmission systems such as IEEE 802.11p are facing serious challenges originating from the vehicular channel. Due to the channel's high Doppler frequencies and long path delays, channel estimation and equalization are pivotal aspects of the receiver structure. Consequently, a wide variety of tracking concepts have been proposed to tackle the challenge of these time- and frequency-selective channels, ranging from the introduction of additional pilots to complex message passing algorithms. DFE inarguably allows the best tracking of the channel's impulse response but is often rejected due to its alleged computational complexity. With the proposed Delayed DFE receiver concept, complex buffering structures are avoided while already existing transmitter structures can be reused. Due to the high precision of the obtained channel state information, further optimization can be applied to adaptively suppress channel noise. © 2013 IEEE.view abstract doi: 10.1109/VNC.2013.6737585 2013 • 92 **Parabolic control problems in measure spaces with sparse solutions**

Casas, E. and Clason, C. and Kunisch, K.*SIAM Journal on Control and Optimization*51 28-63 (2013)Optimal control problems in measure spaces lead to controls that have small support, which is desirable, e.g., in the context of optimal actuator placement. For problems governed by parabolic partial differential equations, well-posedness is guaranteed in the space of square-integrable measure-valued functions, which leads to controls with a spatial sparsity structure. A conforming approximation framework allows one to derive numerically accessible optimality conditions as well as convergence rates. In particular, although the state is discretized, the control problem can still be formulated and solved in the measure space. Numerical examples illustrate the structural features of the optimal controls. © 2013 Society for Industrial and Applied Mathematics.view abstract doi: 10.1137/120872395 2013 • 91 **Optimization of smart Heusler alloys from first principles**

Entel, P. and Siewert, M. and Gruner, M.E. and Chakrabarti, A. and Barman, S.R. and Sokolovskiy, V.V. and Buchelnikov, V.D.*Journal of Alloys and Compounds*577 S107-S112 (2013)The strong magnetoelastic interaction in ternary X2YZ Heusler alloys is responsible for the appearance of magnetostructural phase transitions and related functional properties such as the magnetocaloric and magnetic shape-memory effects. Here, X and Y are transition metal elements and Z is usually an element from the III-V group. In order to discuss possibilities to optimize the multifunctional effects, we use density functional theory calculations from which the martensitic driving forces of the magnetic materials can be derived. We find that the electronic contribution arising from the band Jahn-Teller effect is one of the major driving forces. The ab initio calculations also give a hint of how to design new intermetallics with higher martensitic transformation temperatures compared to the prototype alloy system Ni-Mn-Ga. As an example, we discuss quaternary PtxNi2-xMnGa alloys which have properties very similar to Ni-Mn-Ga but exhibit a higher maximal eigenstrain of 14%. © 2012 Elsevier B.V. All rights reserved.view abstract doi: 10.1016/j.jallcom.2012.03.005 2013 • 90 **Interaction of phase transformation and magnetic properties of Heusler alloys: A density functional theory study**

Entel, P. and Gruner, M.E. and Comtesse, D. and Wuttig, M.*JOM*65 1540-1549 (2013)The structural, electronic, and magnetic properties of functional Ni-Mn-Z (Z = Ga, In, Sn, and Sb) Heusler alloys are studied by first-principles and Monte Carlo tools. The ab initio calculations give a basic understanding of the underlying physics that are associated with the complex magnetic behavior arising from the competition of ferromagnetic and antiferromagnetic interactions with increasing chemical disorder in the super cell. This complex magnetic ordering is the driving mechanism of structural transformations. It also essentially determines the multifunctional properties of the Heusler alloys such as magnetic shape-memory and magnetocaloric effects. The thermodynamic properties can be calculated by using the ab initio magnetic exchange parameters in finite-temperature Monte Carlo simulations. The experimental entropy and specific heat changes across the magnetostructural transition are accurately reproduced by the Monte Carlo simulations. The predictive power of the first-principles calculations allows one to optimize the functional features by choosing optimal compositions. © 2013 The Minerals, Metals & Materials Society.view abstract doi: 10.1007/s11837-013-0757-2 2013 • 89 **B-and strong stationarity for optimal control of static plasticity with hardening**

Herzog, R. and Meyer, C. and Wachsmuth, G.*SIAM Journal on Optimization*23 321-352 (2013)Optimal control problems for the variational inequality of static elastoplasticity with linear kinematic hardening are considered. The control-to-state map is shown to be weakly directionally differentiable, and local optimal controls are proved to verify an optimality system of B-stationary type. For a modified problem, local minimizers are shown to even satisfy an optimality system of strongly stationary type. © 2013 Society for Industrial and Applied Mathematics.view abstract doi: 10.1137/110821147 2013 • 88 **Algorithms for the optimization of RBE-weighted dose in particle therapy**

Horcicka, M. and Meyer, C. and Buschbacher, A. and Durante, M. and Krämer, M.*Physics in Medicine and Biology*58 275-286 (2013)We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugate gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. At the end, we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers. © 2013 Institute of Physics and Engineering in Medicine.view abstract doi: 10.1088/0031-9155/58/2/275 2013 • 87 **On the annealing mechanism of AuGe/Ni/Au ohmic contacts to a two-dimensional electron gas in GaAs/AlxGa1-xAs heterostructures**
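The Fletcher-Reeves conjugate-gradient scheme that Horcicka et al. found fastest can be sketched on a small quadratic objective. The snippet below is a generic textbook version with exact line search, not the TRiP98 implementation; the matrix and right-hand side are arbitrary illustrative values.

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fletcher_reeves(A, b, x, iterations=10):
    """Minimize f(x) = 0.5 x^T A x - b^T x by Fletcher-Reeves
    conjugate gradients with exact line search (valid for quadratics)."""
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]  # gradient: A x - b
    d = [-gi for gi in g]                              # initial descent direction
    for _ in range(iterations):
        Ad = matvec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)                # exact step length
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(matvec(A, x), b)]
        beta = dot(g_new, g_new) / dot(g, g)           # Fletcher-Reeves update
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
        if dot(g, g) < 1e-20:                          # converged
            break
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
solution = fletcher_reeves(A, b, [0.0, 0.0])
print(solution)  # converges to A^{-1} b = [1/11, 7/11]
```

On an n-dimensional quadratic the method reaches the minimum in at most n iterations, which is the structural reason it can outperform plain steepest descent.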

Koop, E.J. and Iqbal, M.J. and Limbach, F. and Boute, M. and Van Wees, B.J. and Reuter, D. and Wieck, A.D. and Kooi, B.J. and Van Der Wal, C.H.*Semiconductor Science and Technology*28 (2013)Ohmic contacts to a two-dimensional electron gas (2DEG) in GaAs/Al xGa1 -xAs heterostructures are often realized by annealing of AuGe/Ni/Au that is deposited on its surface. We studied how the quality of this type of ohmic contact depends on the annealing time and temperature, and how optimal parameters depend on the depth of the 2DEG below the surface. Combined with transmission electron microscopy and energy-dispersive x-ray spectrometry studies of the annealed contacts, our results allow for identifying the annealing mechanism. We use this for proposing a model that can predict the optimal annealing time when our commonly applied recipe is used for a certain heterostructure at a certain temperature. © 2013 IOP Publishing Ltd.view abstract doi: 10.1088/0268-1242/28/2/025006 2013 • 86 **Optimal control for a wire-based storage retrieval machine**

Lalo, W. and Bruckmann, T. and Schramm, D.*Mechanisms and Machine Science*7 631-639 (2013)Wire-based Stewart-Gough platforms are known to allow fast movements of the end-effector. But as for every robotic system, their performance and energy efficiency can be optimized by the generation of end-effector trajectories suited for that particular robot type. In this contribution, the optimal control strategy is applied to an innovative wire-based storage-retrieval machine in order to design time-, power- and energy-optimal trajectories. © Springer Science+Business Media Dordrecht 2013.view abstract doi: 10.1007/978-94-007-4902-3_66 2013 • 85 **Online monitoring of the passivation breakthrough during deep reactive ion etching of silicon using optical plasma emission spectroscopy**

Leopold, S. and Mueller, L. and Kremin, C. and Hoffmann, M.*Journal of Micromechanics and Microengineering*23 (2013)We present optical emission spectroscopy (OES) as a technique for process optimization of the etch step during deep reactive ion etching of silicon. For specific process steps, the spectrum of optical plasma emission is investigated. Two specific wavelengths are identified (fluorine at 703.8 nm and CS compounds at 257.6 nm), which significantly change intensity during the etch step. Their intensity drop is used for the recognition of the passivation layer breakthrough. Thus, the net silicon etch time can be measured. This time can be used for process optimization. A structural analysis of the passivation layer shows its fragmentation during its breakthrough. The plasma-surface interactions and their correlation with the plasma emission are described. In an application example, the passivation breakthrough is investigated in detail. For different process regimes, the residues of the fragmented passivation layer are analyzed by scanning electron microscopy. Residue densities of 14-38 μm-2 are obtained. For silicon grass generation, the OES technique offers a versatile tool for the process optimization of the mask generating process within the first cycles. © 2013 IOP Publishing Ltd.view abstract doi: 10.1088/0960-1317/23/7/074001 2013 • 84 **On parameters optimization of dynamic weighted majority algorithm based on genetic algorithm**

Mejri, D. and Limam, M. and Weihs, C.*2013 5th International Conference on Modeling, Simulation and Applied Optimization, ICMSAO 2013*(2013)The dynamic weighted majority-Winnow (DWM-WIN) algorithm of [5] is a powerful classification method for non-stationary environments which copes with concept-drifting data streams. The setting of the DWM-WIN parameters during training impacts the classification accuracy. Unfortunately, these parameters are often chosen at random, without any rational selection. The objective of this research study is to optimize the choice of these parameters. We use the genetic algorithm (GA) of [6] as an optimization method in order to dynamically search for the best parameter values of DWM-WIN and improve the classification accuracy. To assess this optimized DWM-WIN algorithm, DWM-WIN is used as a fitness function in the GA. Based on four datasets from the UCI data sets repository, simulations have shown that the proposed DWM-WIN-GA outperforms existing classification methods. © 2013 IEEE.view abstract doi: 10.1109/ICMSAO.2013.6552722 2013 • 83 **Optimal actuator and sensor placement based on balanced reduced models**
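A genetic-algorithm parameter search of the kind Mejri et al. apply to DWM-WIN can be sketched as follows; the two parameters and the toy fitness landscape standing in for classification accuracy are invented for illustration and do not come from the paper.

```python
import random

random.seed(42)

def fitness(params):
    # Hypothetical stand-in for DWM-WIN classification accuracy;
    # in this toy landscape it peaks at beta=0.5, theta=0.01.
    beta, theta = params
    return 1.0 - (beta - 0.5) ** 2 - (theta - 0.01) ** 2

def mutate(params, scale=0.1):
    # Small random perturbation of each parameter.
    return [p + random.uniform(-scale, scale) for p in params]

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def genetic_search(pop_size=20, generations=40):
    population = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # elitist truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = genetic_search()
print(best)  # parameter pair near the toy optimum (0.5, 0.01)
```

Because the best individuals survive each generation unchanged, the best fitness never decreases, mirroring how the GA steers the DWM-WIN parameters toward higher accuracy.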

Nestorović, T. and Trajkov, M.*Mechanical Systems and Signal Processing*36 271-289 (2013)In this paper, we have considered the problem of optimal actuator and sensor placement for active large flexible structures and proposed a placement optimization method, which is based on balanced reduced models. It overcomes disadvantages arising from demanding numeric procedures related to high-order structural models. The optimization procedure relies on H2 and H∞ norms, as well as on controllability and observability Gramians, which are related to the structural eigenmodes of interest. The suggested methods for calculating approximate norms are advantageous due to their feasibility for large structures, where exact calculation of norms would cause numeric problems. A rule for determining optimal actuator/sensor placement in relation to actuation load modeling has been derived and proven by examples. The optimization procedure was documented by several examples showing a good agreement between the results obtained using different placement indices. © 2013 Elsevier Ltd.view abstract doi: 10.1016/j.ymssp.2012.12.008 2013 • 82 **Preprocess-Optimization for Polypropylene Laser Sintered Parts**

Reinhardt, T. and Martha, A. and Witt, G. and Köhler, P.*Computer-Aided Design and Applications*11 49-61 (2013)Additive manufacturing offers the opportunity to produce complex geometries with comparatively low effort. Nevertheless, comprehensive information on the proper configuration of the process and related parameters is still missing. Joint research by the chairs for Manufacturing Technology and Computer Aided Design at the University of Duisburg-Essen was carried out in order to obtain detailed information about the factors influencing part quality for polypropylene laser sintering parts. These experimental results provided the basis for the development of software-supported applications for preprocess optimization. © 2013 Copyright CAD Solutions, LLC.view abstract doi: 10.1080/16864360.2013.834138 2013 • 81 **Modeling of a thermoelectric generator for thermal energy regeneration in automobiles**

Tatarinov, D. and Koppers, M. and Bastian, G. and Schramm, D.*Journal of Electronic Materials*42 2274-2281 (2013)In the field of passenger transportation a reduction of the consumption of fossil fuels has to be achieved by any measures. Advanced designs of internal combustion engine have the potential to reduce CO2 emissions, but still suffer from low efficiencies in the range from 33% to 44%. Recuperation of waste heat can be achieved with thermoelectric generators (TEGs) that convert heat directly into electric energy, thus offering a less complicated setup as compared with thermodynamic cycle processes. During a specific driving cycle of a car, the heat currents and temperature levels of the exhaust gas are dynamic quantities. To optimize a thermoelectric recuperation system fully, various parameters have to be tested, for example, the electric and thermal conductivities of the TEG and consequently the heat absorbed and rejected from the system, the generated electrical power and the system efficiency. A Simulink model consisting of a package for dynamic calculation of energy management in a vehicle, coupled with a model of the thermoelectric generator system placed on the exhaust system, determines the drive-cycle-dependent efficiency of the heat recovery system, thus calculating the efficiency gain of the vehicle. The simulation also shows the temperature drop at the heat exchanger along the direction of the exhaust flow and hence the variation of the voltage drop of consecutively arranged TEG modules. The connection between the temperature distribution and the optimal electrical circuitry of the TEG modules constituting the entire thermoelectric recuperation system can then be examined. The simulation results are compared with data obtained from laboratory experiments. We discuss error bars and the accuracy of the simulation results for practical thermoelectric systems embedded in cars. 
© 2013 TMS.view abstract doi: 10.1007/s11664-013-2642-8 2013 • 80 **Acquisition and optimization of three-dimensional spray footprint profiles for coating simulations**
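The module-level behaviour in Tatarinov et al.'s TEG simulation above rests on the standard lumped model of a thermoelectric generator: a voltage source V = αΔT with internal resistance R_i, for which the delivered power P = (αΔT)²·R_L/(R_i + R_L)² peaks at the matched load R_L = R_i. A small Python sketch of this relation; the module values are invented for illustration and are not parameters from the paper.

```python
def teg_power(seebeck, delta_t, r_internal, r_load):
    """Electrical power delivered by a thermoelectric generator modeled
    as a voltage source V = seebeck * delta_t with internal resistance
    r_internal, driving an external load r_load."""
    voltage = seebeck * delta_t                # open-circuit voltage [V]
    current = voltage / (r_internal + r_load)  # series circuit [A]
    return current ** 2 * r_load               # power in the load [W]

# Illustrative values: 0.05 V/K module Seebeck coefficient,
# 150 K temperature difference, 2 ohm internal resistance.
loads = [0.5, 1.0, 2.0, 4.0, 8.0]
powers = [teg_power(0.05, 150.0, 2.0, r) for r in loads]
best_load = loads[powers.index(max(powers))]
print(best_load)  # → 2.0: maximum power transfer at the matched load
```

Because ΔT (and hence the matched operating point) varies along the exhaust line, the optimal electrical circuitry of consecutively arranged modules depends on the temperature distribution, which is the coupling the paper's simulation examines.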

Wiederkehr, T. and Müller, H.*Journal of Thermal Spray Technology*22 1044-1052 (2013)For the simulation of thermal spray coating build-up and the prediction of the coating-thickness distribution on given workpieces, an accurate representation of the mass flow emitted from the spray torch is essential. For two-dimensional (2D) simulations, this flow function often is acquired by measuring the coating thickness in cross-sectional profiles of linear spray beads, and for 3D simulations, usually some form of rotationally symmetric normal distribution function is fitted to measured profile data. However, when using free-formed complex workpieces or arbitrary and nonuniform spray paths, more realistic, nonsymmetric, and 3D flow functions are required. We present an approach to acquire accurate and fully 3D flow distribution functions by measuring 3D coating profiles which result from spraying onto a flat surface with a stationary gun, and improving them by means of a developed optimization method that takes more precise cross-sectional measurements into account. This approach thus combines the advantages of the higher accuracy of 2D measurements while fully preserving the 3D characteristics of the measured profile. © 2013 ASM International.view abstract doi: 10.1007/s11666-013-9927-6 2013 • 79 **Extreme optical properties tuned through phase substitution in a structurally optimized biological photonic polycrystal**

Wu, X. and Erbe, A. and Raabe, D. and Fabritius, H.-O.*Advanced Functional Materials*23 3615-3620 (2013)Biological photonic structures evolved by insects provide inspiring examples for the design and fabrication of synthetic photonic crystals. The small scales covering the beetle Entimus imperialis are subdivided into irregularly shaped domains that mostly show striking colors, yet some appear colorless. The colors originate from photonic crystals consisting of cuticular material and air, which are geometrically separated by a triply periodic D-surface (diamond). The structure and orientation of the photonic crystals are characterized, and it is shown that in colorless domains SiO2 substitutes for the air. The experimental results are incorporated into a precise D-surface structure model used to simulate the photonic band structure. The study shows that the structural parameters in colored domains are optimized for maximum reflectivity by maximizing the stop gap width. The colorless domains provide a biological example of how the optical appearance changes through alteration of the refractive index contrast between the constituting phases. The cuticular photonic polycrystals formed by the beetle Entimus imperialis are a perfect example of a natural diamond-type triply periodic bicontinuous cubic structure with structural parameters that are optimized to open up the largest possible photonic stop gaps. Depending on whether the cuticular network is complemented by air or SiO2, the optical properties of individual domains vary from bright coloration to no coloration and transparency. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.view abstract doi: 10.1002/adfm.201203597 2013 • 78 **Atomic layer deposition of Er2O3 thin films from Er tris-guanidinate and water: Process optimization, film analysis and electrical properties**

Xu, K. and Chaudhuri, A.R. and Parala, H. and Schwendt, D. and Arcos, T.D.L. and Osten, H.J. and Devi, A.*Journal of Materials Chemistry C*1 3939-3946 (2013)For the first time, the combination of the homoleptic erbium tris-guanidinate metalorganic complex ([Er(NMe2-Guan)3]) simply with water yielded high quality Er2O3 thin films on Si(100) substrates employing the atomic layer deposition (ALD) process. The process optimization to grow good quality Er2O3 layers was performed by varying the Er precursor pulse time, water pulse time and purge time. The high reactivity of the Er compound towards water and good thermal stability in the temperature range of 150-275°C (ALD window) resulted in homogeneous, stoichiometric Er2O3 layers with high growth rates (1.1 Å per cycle) and the as-deposited films crystallized in the cubic phase. The saturation behavior at different temperatures in the ALD window and the linear dependence of film thickness as a function of precursor pulse time confirmed the true ALD process. The potential of Er2O3 thin films as gate dielectrics was verified by performing capacitance-voltage (C-V) and current-voltage (I-V) measurements. Dielectric constants estimated from the accumulation capacitance were found to be in the range of 10-13 for layers of different thicknesses (15-30 nm). © 2013 The Royal Society of Chemistry.view abstract doi: 10.1039/c3tc30401a 2013 • 77 **Protein-selective adsorbers by molecular imprinting via a novel two-step surface grafting method**

Yin, D. and Ulbricht, M.*Journal of Materials Chemistry B*1 3209-3219 (2013)Molecularly imprinted polymers (MIP) offer in principle a robust, cost-efficient alternative to antibodies, but it is still a challenge to develop such materials for protein recognition. Here, we report the molecular imprinting of a functional polymeric hydrogel layer with lysozyme as the template in a two-step grafting procedure by a novel initiation approach on a track-etched polyethylene terephthalate membrane surface. This is based on surface functionalization with aliphatic C-Br groups which can be used as an initiator for surface-initiated atom transfer radical polymerization (SI-ATRP) and photo-initiated copolymerization. At first, the scaffold poly(methacrylic acid) (PMAA) was obtained through SI-ATRP of poly(tert-butyl methacrylate) and subsequent hydrolysis. Thereafter, it was assembled with the template to form a stable PMAA/lysozyme complex. In the second step, a polyacrylamide (PAAm) hydrogel was synthesized via UV-initiated surface grafting/crosslinking copolymerization around the scaffold/protein complex. Finally, the template was eluted to yield the grafted hydrogel layer with binding sites having complementary size, shape and appropriate arrangement of the functional groups to rebind lysozyme. The selectivity of lysozyme recognition, relative to cytochrome C with a similar size and isoelectric point, was increased by optimization of the scaffold chain length, UV grafting/crosslinking time and the chemical crosslinking degree of the PAAm-based hydrogel. The feasibility of developing protein MIPs in a straightforward way by independent optimization of the crucial parameters (the structure of the scaffold with its functional groups and of the crosslinked hydrogel matrix) has been demonstrated. 
© The Royal Society of Chemistry.view abstract doi: 10.1039/c3tb20333f 2013 • 76 **Inverse identification of CDM model parameters for DP1000 steel sheets using a hybrid experimental-numerical methodology spanning various stress triaxiality ratios**

Yue, Z.M. and Soyarslan, C. and Badreddine, H. and Saanouni, K. and Tekkaya, A.E.*Key Engineering Materials*554-557 2103-2110 (2013)A hybrid experimental-numerical methodology is presented for the identification of the model parameters regarding a mixed hardening anisotropic finite plasticity fully coupled with isotropic ductile damage in which the micro-crack closure effect is accounted for, for DP1000 steel sheets. The experimental tests involve tensile tests with smooth and pre-notched specimens and shear tests using a recently proposed specimen [16]. These tests cover stress triaxiality ratios lying between 0 (pure shear) and 1/3 (plane strain). To neutralize machine stiffness effects, displacements of the chosen material surface pixels are tracked using the digital image correlation system ARAMIS, where recorded inputs are synchronized with force measurements. Advanced constitutive equations fully coupled with ductile damage, implemented into ABAQUS/Explicit using a user-defined material subroutine VUMAT, are used. 3D hexahedral elements (rather than thin shell elements) are used to model the tests, and the identification methodology combines the FEM using the VUMAT together with experimental results using an appropriate inverse method in the framework of MATLAB. The validity of the material model and transferability of its parameters are checked using tests involving complex strain paths. Copyright © 2013 Trans Tech Publications Ltd.view abstract doi: 10.4028/www.scientific.net/KEM.554-557.2103 2012 • 75 **Optimal control of coherent anti-Stokes Raman scattering image contrast**

Bergner, G. and Schlücker, S. and Kampe, B. and Dittrich, P. and Dietzek, B. and Popp, J.*Applied Physics Letters*100 (2012)Optimal control of coherent anti-Stokes Raman scattering (CARS) image contrast is reported. The setup combines an evolutionary strategy and a closed-loop feedback with a liquid-crystal spatial modulator to control the spectrum of the Stokes pulse within a CARS scheme to optimize the vibrational contrast of CARS images. The CARS excitation spectrum is optimized for image contrast at a pre-determined wavenumber position. The optimization feedback uses an image-contrast parameter generated from the image itself as the experimentally imposed fitness parameter. This strategy allows for enhancing the image contrast by a factor of up to 2.6. © 2012 American Institute of Physics.view abstract doi: 10.1063/1.4731205 2012 • 74 **Using NC-path deformation for compensating tool deflections in micromilling of hardened steel**

Biermann, D. and Krebs, E. and Sacharow, A. and Kersting, P.*Procedia CIRP*1 132-137 (2012)During the micromachining of hardened materials, the low stiffness of the milling tool results in increased tool deflection, which has a great influence on the shape and dimensional accuracy of the machined components. In order to compensate for these deflections, an optimization method is presented in this paper. Based on measured form errors of the machined workpieces, the NC programs are optimized iteratively to reduce the shape deviations. To verify this method, experimental investigations were carried out by milling pockets in hardened steel. The results show a significant reduction of the tool deflection after the optimization. © 2012 The Authors.view abstract doi: 10.1016/j.procir.2012.04.022 2012 • 73 **Resampling methods for meta-model validation with recommendations for evolutionary computation**

Bischl, B. and Mersmann, O. and Trautmann, H. and Weihs, C.*Evolutionary Computation*20 249-275 (2012)Meta-modeling has become a crucial tool in solving expensive optimization problems. Much of the work in the past has focused on finding a good regression method to model the fitness function. Examples include classical linear regression, splines, neural networks, Kriging and support vector regression. This paper specifically draws attention to the fact that assessing model accuracy is a crucial aspect in the meta-modeling framework. Resampling strategies such as cross-validation, subsampling, bootstrapping, and nested resampling are prominent methods for model validation and are systematically discussed with respect to possible pitfalls, shortcomings, and specific features. A survey of meta-modeling techniques within evolutionary optimization is provided. In addition, practical examples illustrating some of the pitfalls associated with model selection and performance assessment are presented. Finally, recommendations are given for choosing a model validation technique for a particular setting. © 2012 by the Massachusetts Institute of Technology.view abstract doi: 10.1162/EVCO_a_00069 2012 • 72 **Optimizing the deposition of hydrogen evolution sites on suspended semiconductor particles using on-line photocatalytic reforming of aqueous methanol solutions**

Busser, G.W. and Mei, B. and Muhler, M.*ChemSusChem*5 2200-2206 (2012)The deposition of hydrogen evolution sites on photocatalysts is a crucial step in the multistep process of synthesizing a catalyst that is active for overall photocatalytic water splitting. An alternative approach to conventional photodeposition was developed, applying the photocatalytic reforming of aqueous methanol solutions to deposit metal particles on semiconductor materials such as Ga2O3 and (Ga0.6Zn0.4)(N0.6O0.4). The method allows optimizing the loading of the co-catalysts based on the stepwise addition of their precursors and the continuous online monitoring of the evolved hydrogen. Moreover, a synergetic effect between different co-catalysts can be directly established. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.view abstract doi: 10.1002/cssc.201200374 2012 • 71 **A measure space approach to optimal source placement**

Clason, C. and Kunisch, K.*Computational Optimization and Applications*53 155-171 (2012)The problem of optimal placement of point sources is formulated as a distributed optimal control problem with sparsity constraints. For practical relevance, partial observations as well as partial and non-negative controls need to be considered. Although well-posedness of this problem requires a non-reflexive Banach space setting, a primal-predual formulation of the optimality system can be approximated well by a family of semi-smooth equations, which can be solved by a superlinearly convergent semi-smooth Newton method. Numerical examples indicate the feasibility for optimal light source placement problems in diffusive photochemotherapy. © 2011 Springer Science+Business Media, LLC.view abstract doi: 10.1007/s10589-011-9444-9 2012 • 70 **Advanced buckyball joints: Synthesis, complex formation and computational simulations of centrohexaindane-extended tribenzotriquinacene receptors for C60 fullerene**

Henne, S. and Bredenkötter, B. and Dehghan Baghi, A.A. and Schmid, R. and Volkmer, D.*Dalton Transactions*41 5995-6002 (2012)The synthesis of a structurally optimized tribenzotriquinacene receptor 9 is described, which is extended by centrohexaindane moieties to give rise to a half-round concave ball bearing with optimum shape complementarity towards C60 fullerene. Spectroscopic investigations reveal that this novel host forms a 1:1 host-guest complex with C60 with a complex stability constant of K1 = 14550 ± 867 M-1, which is considerably higher than those of structurally related tribenzotriquinacene hosts reported previously. Both the suppression of binding of a second receptor (i.e., formation of a 2:1 host-guest complex) and the increase in the stability of the 1:1 complex can be rationalized in terms of multiple additive van der Waals and π-π interactions between C60 and the aromatic groups of the receptor, as revealed by DFT + D and force-field calculations. Combining results from spectroscopic and theoretical investigations leads to predictions for future receptor designs, which - apart from shape complementarity - will have to consider an optimized electronic match (i.e., partial charge transfer) between the receptor and the fullerene guest. © 2012 The Royal Society of Chemistry.view abstract doi: 10.1039/c2dt12435a 2012 • 69 **An image space approach to Cartesian based parallel MR imaging with total variation regularization**

Keeling, S.L. and Clason, C. and Hintermüller, M. and Knoll, F. and Laurain, A. and von Winckel, G.*Medical Image Analysis*16 189-200 (2012)The Cartesian parallel magnetic imaging problem is formulated variationally using a high-order penalty for coil sensitivities and a total variation like penalty for the reconstructed image. Then the optimality system is derived and numerically discretized. The objective function used is non-convex, but it possesses a bilinear structure that allows the ambiguity among solutions to be resolved technically by regularization and practically by normalizing a pre-estimated norm of the reconstructed image. Since the objective function is convex in each single argument, convex analysis is used to formulate the optimality condition for the image in terms of a primal-dual system. To solve the optimality system, a nonlinear Gauss-Seidel outer iteration is used in which the objective function is minimized with respect to one variable after the other using an inner generalized Newton iteration. Computational results for in vivo MR imaging data show that a significant improvement in reconstruction quality can be obtained by using the proposed regularization methods in relation to alternative approaches. © 2011 Elsevier B.V.view abstract doi: 10.1016/j.media.2011.07.002 2012 • 68 **Growth optimization and characterization of lattice-matched Al0.82In0.18N optical confinement layer for edge emitting nitride laser diodes**

Kim-Chauveau, H. and Frayssinet, E. and Damilano, B. and De Mierry, P. and Bodiou, L. and Nguyen, L. and Vennéguès, P. and Chauveau, J.-M. and Cordier, Y. and Duboz, J.Y. and Charash, R. and Vajpeyi, A. and Lamy, J.-M. and Akhte...*Journal of Crystal Growth*338 20-29 (2012)We present the growth optimization and the doping by the metal organic chemical vapor deposition of lattice-matched Al0.82In0.18N bottom optical confinement layers for edge emitting laser diodes. Due to the increasing size and density of V-shaped defects in Al1-xInxN with increasing thickness, we have designed an Al1-xInxN/GaN multilayer structure by optimizing the growth and thickness of the GaN interlayer. The Al1-xInxN and GaN interlayers in the multilayer structure were both doped using the same SiH4 flow, while the Si levels in both layers were found to be significantly different by SIMS. The optimized 8×(Al0.82In0.18N/GaN=54/6 nm) multilayer structures grown on free-standing GaN substrates were characterized by high resolution X-ray diffraction, atomic force microscopy and transmission electron microscopy, along with the in-situ measurements of stress evolution during growth. Finally, lasing was obtained from the UV (394 nm) to blue (436 nm) wavelengths, in electrically injected, edge-emitting, cleaved-facet laser diodes with 480 nm thick Si-doped Al1-xInxN/GaN multilayers as bottom waveguide claddings. © 2011 Elsevier B.V. All rights reserved.view abstract doi: 10.1016/j.jcrysgro.2011.10.016 2012 • 67 **Tuning and evolution of support vector kernels**

Koch, P. and Bischl, B. and Flasch, O. and Bartz-Beielstein, T. and Weihs, C. and Konen, W.*Evolutionary Intelligence*5 153-170 (2012)Kernel-based methods like Support Vector Machines (SVM) have been established as powerful techniques in machine learning. The idea of SVM is to perform a mapping from the input space to a higher-dimensional feature space using a kernel function, so that a linear learning algorithm can be employed. However, the burden of choosing the appropriate kernel function is usually left to the user. It can easily be shown that the accuracy of the learned model highly depends on the chosen kernel function and its parameters, especially for complex tasks. In order to obtain a good classification or regression model, an appropriate kernel function in combination with optimized pre- and post-processed data must be used. To circumvent these obstacles, we present two solutions for optimizing kernel functions: (a) automated hyperparameter tuning of kernel functions combined with an optimization of pre- and post-processing options by Sequential Parameter Optimization (SPO) and (b) evolving new kernel functions by Genetic Programming (GP). We review modern techniques for both approaches, comparing their different strengths and weaknesses. We apply tuning to SVM kernels for both regression and classification. Automatic hyperparameter tuning of standard kernels and pre- and post-processing options always yielded systems with excellent prediction accuracy on the considered problems. SPO-tuned kernels in particular led to much better results than all other tested tuning approaches. Regarding GP-based kernel evolution, our method rediscovered multiple standard kernels, but no significant improvements over standard kernels were obtained. © 2012 Springer-Verlag.view abstract doi: 10.1007/s12065-012-0073-8 2012 • 66 **New robust nonconforming finite elements of higher order**

Köster, M. and Ouazzi, A. and Schieweck, F. and Turek, S. and Zajac, P.*Applied Numerical Mathematics*62 166-184 (2012)We show that existing quadrilateral nonconforming finite elements of higher order exhibit a reduction in the order of approximation if the sequence of meshes is still shape-regular but no longer consists of asymptotically affine-equivalent mesh cells. We study second order nonconforming finite elements as members of a new family of higher order approaches which prevent this order reduction. We present a new approach based on the enrichment of the original polynomial space on the reference element by means of nonconforming cell bubble functions which can be removed at the end by static condensation. Optimal estimates of the approximation and consistency error are shown in the case of a Poisson problem which imply an optimal order of the discretization error. Moreover, we discuss the known nonparametric approach to prevent the order reduction in the case of higher order elements, where the basis functions are defined as polynomials on the original mesh cell. Regarding the efficient treatment of the resulting linear discrete systems, we analyze numerically the convergence of the corresponding geometrical multigrid solvers which are based on the canonical full order grid transfer operators. Based on several benchmark configurations, for scalar Poisson problems as well as for the incompressible Navier-Stokes equations (representing the desired application field of these nonconforming finite elements), we demonstrate the high numerical accuracy, flexibility and efficiency of the discussed new approaches which have been successfully implemented in the FeatFlow software (www.featflow.de). The presented results show that the proposed FEM-multigrid combinations (together with discontinuous pressure approximations) appear to be very advantageous candidates for efficient simulation tools, particularly for incompressible flow problems. © 2012 IMACS. 
Published by Elsevier B.V. All rights reserved.view abstract doi: 10.1016/j.apnum.2011.11.005 2012 • 65 **Separable approximate optimization of support vector machines for distributed sensing**

Lee, S. and Stolpe, M. and Morik, K.*Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*7524 LNAI 387-402 (2012)Sensor measurements from diverse locations connected with possibly low-bandwidth communication channels pose the challenge of resource-restricted distributed data analysis. In such settings, it would be desirable to perform learning in each location as much as possible without transferring all data to a central node. Applying support vector machines (SVMs) with nonlinear kernels becomes nontrivial, however. In this paper, we present an efficient optimization scheme for training SVMs over such sensor networks. Our framework performs optimization independently in each node, using only the local features stored in the respective node. We make use of multiple local kernels and explicit approximations to the feature mappings induced by them. Together, they allow us to construct a separable surrogate objective that provides an upper bound of the primal SVM objective. A central coordination scheme is also designed to adjust the weights among local kernels for improved prediction, while minimizing communication cost. © 2012 Springer-Verlag.view abstract doi: 10.1007/978-3-642-33486-3_25 2012 • 64 **Hydrous layer silicates as precursors for zeolites obtained through topotactic condensation: A review**

Marler, B. and Gies, H.*European Journal of Mineralogy*24 405-428 (2012)In the search for new synthesis routes of zeolites, the topotactic condensation of hydrous layer silicates shows promising results in generating novel zeolite materials with distinct framework types which might have new, interesting properties as, e.g., molecular sieves or form-selective catalysts. In order to manipulate and optimise the condensation process detailed knowledge of the crystal structures is essential. The layer silicates considered here are of a special type and can be designated as high-silica hydrous layer silicates, HLSs. The structures consist of a tetrahedral layer of interconnected [SiO 4]-units containing equal numbers of terminal silanol/siloxy groups on either side of the layer and of an inter-layer region where cations of low charge density (predominantly organic cations) and water molecules are located. A topotactic condensation of the layers performed at temperatures of around 500°C with simultaneous expulsion of the inter-layer constituents is able to form fully condensed, uninterrupted framework silicates. The topotactic conversion has so far been described rarely in comparison to the classical hydrothermal synthesis of zeolites. Nevertheless, several hydrous layer silicates with different layer topologies were successfully converted into zeolites of different framework types using this synthesis route: CAS (type material: EU-20, EU-20b), CDO (type material: CDS-1), FER (type material: siliceous ferrierite), MWW (type material: MCM-22), NSI (type material: NU-6(2)), RRO (type material: RUB-41), RWR (type material: RUB-24), SOD (type material: guest-free silica sodalite). Thereby, four new zeolite framework types were obtained which have not been synthesized, so far, by direct hydrothermal synthesis (CDO, NSI, RRO, RWR). 
This review gives an overview on the hydrous layer silicates being structurally characterized in detail, on the condensation process and on some properties of the resulting zeolite materials. © 2012 E. Schweizerbart'sche Verlagsbuchhandlung.view abstract doi: 10.1127/0935-1221/2012/0024-2187 2012 • 63 **Optimized RTD-HBT VCO design based on large signal transient simulations**

Munstermann, B. and Tchegho, A. and Keller, G. and Tegude, F.-J.*Conference Proceedings - International Conference on Indium Phosphide and Related Materials*32-35 (2012)This paper presents an optimized RTD-HBT single-ended voltage-controlled oscillator topology with increased tuning range and improved frequency stability at 20 GHz oscillation frequency. By connecting the RTD to the emitter of an HBT, a larger voltage swing at the parallel resonator can be achieved compared to the direct connection. In addition, the influence of the RTD capacitance on the oscillation frequency can be suppressed efficiently. Transient-assisted harmonic balance simulations promise an oscillation power increased by 2 dB and a doubled tuning range of about 3.2 GHz compared to conventional RTD-HBT circuits. © 2012 IEEE.view abstract doi: 10.1109/ICIPRM.2012.6403311 2012 • 62 **Experimental model identification and vibration control of a smart cantilever beam using piezoelectric actuators and sensors**

Nestorović, T. and Durrani, N. and Trajkov, M.*Journal of Electroceramics*29 42-55 (2012)Mechanical lightweight structures are often prone to unwanted vibrations due to disturbances. Passive methods for increasing the structural damping are often inadequate for vibration suppression, since they introduce additional mass in the form of damping materials, additional stiffening designs or mass dampers. In this paper, the concept of active vibration control for piezoelectric lightweight structures is introduced and presented through several subsequent steps: model identification, controller design, simulation, experimental verification and implementation on a particular object, a piezoelectric smart cantilever beam. Special attention is paid to experimental testing and verification of the results obtained through simulations. The efficiency of the modeling procedure through subspace-based system identification, along with the efficiency of the designed optimal controller, is proven by the experimental verification, which results in vibration suppression to a very high extent, not only in comparison with the uncontrolled case but also in comparison with previously achieved results. The experimental work demonstrates a very good agreement between simulations and experimental results. © 2012 Springer Science+Business Media, LLC.view abstract doi: 10.1007/s10832-012-9736-1 2012 • 61 **On matching short LDPC codes with spectrally-efficient modulation**

Nowak, S. and Kays, R.*IEEE International Symposium on Information Theory - Proceedings*493-497 (2012)The combination of low-density parity-check (LDPC) codes and higher-order modulation usually requires neither an interleaver in between nor elaborate signal mapping design rules. However, as the constellation size increases, a matching interleaver can be employed to compensate for performance losses, especially in the case of structured codes. This interleaver can be generally described by mapping distributions. In this paper, we discuss optimized mapping distributions for short LDPC codes, i.e. on the order of 10^2-10^3 bits. We point out that mapping distributions optimized by means of density evolution may not be suited for short codes. An alternative framework is introduced that utilizes an extended version of extrinsic information transfer functions. Their visualization feature helps to identify good mapping distributions for short codes. © 2012 IEEE.view abstract doi: 10.1109/ISIT.2012.6284238 2012 • 60 **Dynamic forming limits and numerical optimization of combined quasi-static and impulse metal forming**

Taebi, F. and Demir, O.K. and Stiemer, M. and Psyk, V. and Kwiatkowski, L. and Brosius, A. and Blum, H. and Tekkaya, A.E.*Computational Materials Science*54 293-302 (2012)Subject of this work is the incorporation of forming limits in the numerical optimization of technological forming processes for sheet metal. Forming processes with non-linear load paths and strongly varying strain rate, such as combinations of deep drawing and electromagnetic forming, are of particular interest. While inertial forces play a significant role in the latter impulse forming process, the former is of a quasi-static nature, so that inertial forces may be neglected. Although classical forming limit diagrams provide an easily accessible method for the prediction of forming limits, they cannot be applied in situations involving pulsed loading along non-linear strain paths. Hence, they are extended here to forming limit surfaces. The target function to be minimized is computed via finite-element simulation. To avoid a large number of simulations, an interior point method is employed as the optimization method. In this algorithm, forming limits appear via a logarithmic barrier function, which has to be computed sufficiently fast. The optimization algorithm is exemplarily applied to an identification problem for a two-stage forming process. © 2011 Elsevier B.V. All rights reserved.view abstract doi: 10.1016/j.commatsci.2011.10.008 2012 • 59 **Influence of handling parameters on coating characteristics in order to produce near-net-shape wear resistant coatings**

Tillmann, W. and Krebs, B.*Journal of Thermal Spray Technology*21 644-650 (2012)The present study investigates the influence of spray torch handling parameters such as the spray angle, spray distance, track pitch, and gun velocity on the deposition rate and the microstructure of atmospheric plasma sprayed WC-12Co coatings as well as twin wire arc sprayed WSC-Fe coatings. Similarities as well as fundamental differences in the sensitivity of the two spray processes, regarding changes in handling parameters are discussed, using results of light microscopic analyses. Both coating systems show distinct changes of the deposition rate when varying the handling parameters. An empirical model could be determined to describe the coating deposition. This model enables an optimization of path planning processes by reducing the number of optimization loops. However, the coatings show visible changes in the microstructure, which have to be taken into consideration in order to guarantee the production of high quality coatings. © ASM International.view abstract doi: 10.1007/s11666-012-9735-4 2012 • 58 **A strategy for the synthesis of mesostructured metal oxides with lower oxidation states**

Tüysüz, H. and Weidenthaler, C. and Schüth, F.*Chemistry - A European Journal*18 5080-5086 (2012)A detailed study on the pseudomorphic conversion of ordered mesoporous Co3O4 and ferrihydrite into CoO and Fe3O4, respectively, by using alcohol/water vapor as a gentle reducing agent is described. The reduction conditions for the transformation were optimized. In addition, the first one-pot synthesis of mesostructured CoO by using nanocasting with cubic ordered silica as a hard template is demonstrated. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.view abstract doi: 10.1002/chem.201103650 2012 • 57 **Multi-objective evolutionary feature selection for instrument recognition in polyphonic audio mixtures**

Vatolkin, I. and Preuß, M. and Rudolph, G. and Eichhoff, M. and Weihs, C.*Soft Computing*16 2027-2047 (2012)Instrument recognition is one of the music information retrieval research topics. This task becomes very challenging if several instruments are played simultaneously because of their varying physical characteristics: inharmonic attack noise, energy development during the attack-decay-sustain-release envelope or overtone distribution. In our framework, we treat instrument detection as a machine-learning task based on a large amount of preprocessed audio features, with the target of building classification models. Since classification algorithms are very sensitive to the feature input and the optimal feature set differs from instrument to instrument, we propose to run a multi-objective feature selection procedure before building classification models. Two objectives are considered for evaluation: classification mean-squared error and feature rate (a smaller number of features stands for reduced costs and a decreased risk of overfitting). The analysis of the extensive experimental study confirms that the application of an evolutionary multi-objective algorithm is a good choice for optimizing feature selection for music instrument identification. © 2012 Springer-Verlag.view abstract doi: 10.1007/s00500-012-0874-9 2011 • 56 **Optimizing the synthesis of cobalt-based catalysts for the selective growth of multiwalled carbon nanotubes under industrially relevant conditions**

Becker, M.J. and Xia, W. and Tessonnier, J.-P. and Blume, R. and Yao, L. and Schlögl, R. and Muhler, M.*Carbon*49 5253-5264 (2011)An industrially applicable cobalt-based catalyst was optimized for the production of multiwalled carbon nanotubes (CNTs) from ethene in a hot-wall reactor. A series of highly active Co-Mn-Al-Mg spinel-type oxides with systematically varied Co:Mn ratios was synthesized by precipitation and calcined at different temperatures. The addition of Mn drastically enhanced the catalytic activity of the Co nanoparticles, resulting in an extraordinarily high CNT yield of up to 249 g CNT/g cat. All quaternary catalysts possessed an excellent selectivity towards the growth of CNTs. The detailed characterization of the obtained CNTs by electron microscopy, Raman spectroscopy and thermogravimetry demonstrated that a higher Mn content results in a narrower CNT diameter distribution, while the morphology of the CNTs and their oxidation resistance remain rather similar. The temperature-programmed reduction of the calcined precursors as well as in situ X-ray absorption spectroscopy investigations during the growth revealed that the remarkable promoting effect of Mn is due to the presence of Mn(II) oxide in the working catalyst, which enhances the catalytic activity of the metallic Co nanoparticles by strong metal-oxide interactions. The observed correlations between the added Mn promoter and the catalytic performance are of high relevance for the production of CNTs on an industrial scale. © 2011 Elsevier Ltd. All rights reserved.view abstract doi: 10.1016/j.carbon.2011.07.043 2011 • 55 **A chloride resistant high potential oxygen reducing biocathode based on a fungal laccase incorporated into an optimized Os-complex modified redox hydrogel**

Beyl, Y. and Guschin, D.A. and Shleev, S. and Schuhmann, W.*Electrochemistry Communications*13 474-476 (2011)A chloride-resistant high-potential biocathode based on Trametes hirsuta laccase incorporated into an optimized Os-complex modified redox hydrogel (80 mV potential difference to the T1 Cu) is described. The bioelectrocatalytic activity towards O2 reduction is due to an intimate access of the polymer-bound Os-complex to the T1 Cu site. The chloride resistance of the biocathode is due to the tight binding of the polymer-bound Os-complex to the T1 Cu site. © 2011 Elsevier B.V.view abstract doi: 10.1016/j.elecom.2011.02.024 2011 • 54 **Optimal designs for indirect regression**

Biedermann, S. and Bissantz, N. and Dette, H. and Jones, E.*Inverse Problems*27 (2011)In many real life applications, it is impossible to observe the feature of interest directly. For example, non-invasive medical imaging techniques rely on indirect observations to reconstruct an image of the patient's internal organs. In this paper, we investigate optimal designs for such indirect regression problems. We use the optimal designs as benchmarks to investigate the efficiency of designs commonly used in applications. Several examples are discussed for illustration. Our designs provide guidelines to scientists regarding the experimental conditions at which the indirect observations should be taken in order to obtain an accurate estimate for the object of interest. Moreover, we demonstrate that in many cases the commonly used uniform design is close to optimal. © 2011 IOP Publishing Ltd.view abstract doi: 10.1088/0266-5611/27/10/105003 2011 • 53 **A study on micro-machining technology for the machining of NiTi: Five-axis micro-milling and micro deep-hole drilling**

Biermann, D. and Kahleyss, F. and Krebs, E. and Upmeier, T.*Journal of Materials Engineering and Performance*20 745-751 (2011)Micro-sized applications are gaining more and more relevance for NiTi-based shape memory alloys (SMA). Different types of micro-machining offer unique possibilities for the manufacturing of NiTi components. The advantage of machining is the low thermal influence on the workpiece. This is important, because the phase transformation temperatures of NiTi SMAs can be changed and the components may need extensive post manufacturing. The article offers a simulation-based approach to optimize five-axis micro-milling processes with respect to the special material properties of NiTi SMA. Especially, the influence of the various tool inclination angles is considered for introducing an intelligent tool inclination optimization algorithm. Furthermore, aspects of micro deep-hole drilling of SMAs are discussed. Tools with diameters as small as 0.5 mm are used. The possible length-to-diameter ratio reaches up to 50. This process offers new possibilities in the manufacturing of microstents. The study concentrates on the influence of the cutting speed, the feed and the tool design on the tool wear and the quality of the drilled holes. © ASM International.view abstract doi: 10.1007/s11665-010-9796-9 2011 • 52 **On the thermomechanical coupling in finite strain plasticity theory with non-linear kinematic hardening by means of incremental energy minimization**

Canadija, M. and Mosler, J.*International Journal of Solids and Structures*48 1120-1129 (2011)The thermomechanical coupling in finite strain plasticity theory with non-linear kinematic hardening is analyzed within the present paper. This coupling is of utmost importance in many applications, e.g., in those showing low cycle fatigue (LCF) under large strain amplitudes. Since the by now classical thermomechanical coupling originally proposed by Taylor and Quinney cannot be used directly in case of kinematic hardening, the change in heat as a result of plastic deformation is computed by applying the first law of thermodynamics. Based on this balance law, together with a finite strain plasticity model, a novel variationally consistent method is elaborated. Within this method and following Stainier and Ortiz (2010), all unknown variables are jointly and conveniently computed by minimizing an incrementally defined potential. In sharp contrast to previously published works, the evolution equations are a priori enforced by employing a suitable parameterization of the flow rule and the evolution equations. The advantages of this parameterization are, at least, twofold. First, it leads eventually to an unconstrained stationarity problem which can be directly applied to any yield function being positively homogeneous of degree one, i.e., the approach shows a broad range of application. Secondly, the parameterization provides enough flexibility even for a broad range of non-associative models such as kinematic hardening of Armstrong-Frederick-type. Different to Stainier and Ortiz (2010), the continuous variational problem is approximated by a standard, fully-implicit time integration. The applicability of the resulting numerical implementation is finally demonstrated by analyzing the thermodynamically coupled response for a loading cycle. © 2011 Elsevier Ltd.view abstract doi: 10.1016/j.ijsolstr.2010.12.018 2011 • 51 **Minimal invasion: An optimal L∞ state constraint problem**

Clason, C. and Ito, K. and Kunisch, K.*ESAIM: Mathematical Modelling and Numerical Analysis*45 505-522 (2011)In this work, the least pointwise upper and/or lower bounds on the state variable on a specified subdomain of a control system under piecewise constant control action are sought. This results in a non-smooth optimization problem in function spaces. Introducing a Moreau-Yosida regularization of the state constraints, the problem can be solved using a superlinearly convergent semi-smooth Newton method. Optimality conditions are derived, convergence of the Moreau-Yosida regularization is proved, and well-posedness and superlinear convergence of the Newton method are shown. Numerical examples illustrate the features of this problem and the proposed approach. © EDP Sciences, SMAI, 2010.view abstract doi: 10.1051/m2an/2010064 2011 • 50 **A duality-based approach to elliptic control problems in non-reflexive Banach spaces**
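As a loose illustration of the Moreau-Yosida idea in the Clason, Ito and Kunisch entry above: the sketch below enforces a pointwise upper bound on a finite-dimensional toy problem via a quadratic penalty and solves the resulting piecewise-smooth optimality condition with a semi-smooth Newton iteration. The problem, the bound, and the value of `gamma` are illustrative assumptions, not the paper's function-space setting.

```python
import numpy as np

def semismooth_newton(f, b, gamma, tol=1e-12, max_iter=50):
    """Solve the Moreau-Yosida regularized toy problem
    min_u 0.5*||u - f||^2 + (gamma/2)*||max(0, u - b)||^2,
    i.e. the pointwise bound u <= b enforced by a quadratic penalty."""
    u = f.copy()
    for _ in range(max_iter):
        residual = u - f + gamma * np.maximum(0.0, u - b)
        if np.linalg.norm(residual) < tol:
            break
        # generalized (Newton) derivative of max(0, u - b): active-set indicator
        jacobian = 1.0 + gamma * (u > b).astype(float)
        u = u - residual / jacobian
    return u

f = np.array([0.5, 2.0, -1.0, 3.0])   # "data" to be tracked
b = np.ones_like(f)                    # pointwise upper bound
gamma = 1e6                            # regularization parameter
u = semismooth_newton(f, b, gamma)     # approx. min(f, b) for large gamma
```

Because the penalty is piecewise linear in the residual, the active set settles after finitely many steps and the iteration terminates exactly, mirroring the superlinear (here: finite) convergence the entry describes.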

Clason, C. and Kunisch, K.*ESAIM - Control, Optimisation and Calculus of Variations*17 243-266 (2011)Convex duality is a powerful framework for solving non-smooth optimal control problems. However, for problems set in non-reflexive Banach spaces such as L1(Ω) or BV(Ω), the dual problem is formulated in a space which has difficult measure theoretic structure. The predual problem, on the other hand, can be formulated in a Hilbert space and entails the minimization of a smooth functional with box constraints, for which efficient numerical methods exist. In this work, elliptic control problems with measures and functions of bounded variation as controls are considered. Existence and uniqueness of the corresponding predual problems are discussed, as is the solution of the optimality systems by a semismooth Newton method. Numerical examples illustrate the structural differences in the optimal controls in these Banach spaces, compared to those obtained in corresponding Hilbert space settings. © 2009 EDP Sciences, SMAI.view abstract doi: 10.1051/cocv/2010003 2011 • 49 **Optimal Experimental Design Strategies for Detecting Hormesis**

Dette, H. and Pepelyshev, A. and Wong, W.K.*Risk Analysis*31 1949-1960 (2011)Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations. © 2011 Society for Risk Analysis.view abstract doi: 10.1111/j.1539-6924.2011.01625.x 2011 • 48 **Optimal design for smoothing splines**

Dette, H. and Melas, V.B. and Pepelyshev, A.*Annals of the Institute of Statistical Mathematics*63 981-1003 (2011)In the common nonparametric regression model we consider the problem of constructing optimal designs, if the unknown curve is estimated by a smoothing spline. A special basis for the space of natural splines is introduced and the local minimax property for these splines is used to derive two optimality criteria for the construction of optimal designs. The first criterion determines the design for a most precise estimation of the coefficients in the spline representation and corresponds to D-optimality, while the second criterion is the G-optimality criterion and corresponds to an accurate prediction of the curve. Several properties of the optimal designs are derived. In general, D- and G-optimal designs are not equivalent. Optimal designs are determined numerically and compared with the uniform design. © The Institute of Statistical Mathematics, Tokyo 2009.view abstract doi: 10.1007/s10463-009-0265-x 2011 • 47 **Joint optimization of independent multiple responses**
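The D-optimality criterion used in the Dette, Melas and Pepelyshev entry above can be illustrated on the classical quadratic-regression case rather than the paper's spline setting: D-optimality maximizes the determinant of the normalized information matrix, and on [-1, 1] the known optimum puts equal weight on {-1, 0, 1}, beating a uniform design.

```python
import numpy as np

def info_matrix(points, weights):
    """Normalized information matrix for quadratic regression
    y = t0 + t1*x + t2*x^2 under a weighted design."""
    F = np.array([[1.0, x, x**2] for x in points])
    return F.T @ (weights[:, None] * F)

# candidate designs on [-1, 1]
d_optimal = (np.array([-1.0, 0.0, 1.0]), np.full(3, 1.0 / 3.0))
uniform = (np.linspace(-1.0, 1.0, 7), np.full(7, 1.0 / 7.0))

det_opt = np.linalg.det(info_matrix(*d_optimal))   # known value: 4/27
det_uni = np.linalg.det(info_matrix(*uniform))     # strictly smaller
```

The same determinant comparison, with the monomial basis replaced by the natural-spline basis of the entry, is what distinguishes the D- and G-optimal designs discussed there.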

Erdbrügge, M. and Kuhnt, S. and Rudak, N.*Quality and Reliability Engineering International*27 689-704 (2011)Most of the existing methods for the analysis and optimization of multiple responses require some kind of weighting of these responses, for instance in terms of cost or desirability. Particularly at the design stage, such information is rarely available or is rather subjective. An alternative strategy uses loss functions and a penalty matrix that can be decomposed into a standardizing (data-driven) and a weight matrix. The effect of different weight matrices is displayed in joint optimization plots in terms of predicted means and variances of the response variables. In this article, we propose how to choose weight matrices for two or more responses. Furthermore, we prove the Pareto optimality of every point that minimizes the conditional mean of the loss function. © 2011 John Wiley & Sons, Ltd.view abstract doi: 10.1002/qre.1229 2011 • 46 **Geometrical abstraction of screw compressors for thermodynamic optimization**

Hauser, J. and Brümmer, A.*Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science*225 1399-1406 (2011)The construction and development of different rotor profiles is an important area in connection with the development of screw compressors for specific applications. Geometrical performance figures (using criteria to describe interdependencies of geometrical parameters for screw compressors) for profile optimization are used in order to achieve specific improvements in performance. During this process, rotor profiles and spatial parameters are the main factors. Compared to data derived from the front section of rotor profiles, these figures which also take spatial parameters into account provide a better evaluation of gap conditions and operating efficiency of the compressors under examination. © Authors 2011.view abstract doi: 10.1177/0954406210395884 2011 • 45 **Newest developments on the manufacture of helical profiles by hot extrusion**

Khalifa, N.B. and Tekkaya, A.E.*ASME 2011 International Manufacturing Science and Engineering Conference, MSEC 2011*1 459-463 (2011)The paper presents an innovative direct extrusion process, Helical Profile Extrusion (HPE), which increases the flexibility of aluminum profile manufacturing processes. Application fields for such profiles include screw rotors for compressors and pumps. The investigations concentrate on experimental and numerical analyses by 3D-FEM simulations to analyze the influence of friction on the material flow in the extrusion die, in order to determine the optimal parameters with respect to twisting angle and contour accuracy. By means of FEM, the profile shape could be optimized by modifying the die design. The numerical results were validated by experiments. For these investigations, the common aluminum alloy AA6060 was used. The accuracy of the profile contour could be improved significantly. However, increasing the twist angle is limited due to geometrical aspects. Copyright © 2010 by ASME.view abstract doi: 10.1115/MSEC2011-50126 2011 • 44 **The evolution of laminates in finite crystal plasticity: A variational approach**

Kochmann, D.M. and Hackl, K.*Continuum Mechanics and Thermodynamics*23 63-85 (2011)The analysis and simulation of microstructures in solids has gained crucial importance by virtue of the influence of microstructural characteristics on a material's macroscopic mechanical behavior. In particular, the arrangement of dislocations and other lattice defects into particular structures and patterns on the microscale, as well as the resultant inhomogeneous distribution of localized strain, results in a highly altered stress-strain response. Energetic models predicting the mechanical properties are commonly based on thermodynamic variational principles. Modeling the material response in finite strain crystal plasticity very often results in a non-convex variational problem so that the minimizing deformation fields are no longer continuous but exhibit small-scale fluctuations related to probability distributions of deformation gradients to be calculated via energy relaxation. This results in fine structures that can be interpreted as the observed microstructures. In this paper, we first review the underlying variational principles for inelastic materials. We then propose an analytical partial relaxation of a Neo-Hookean energy formulation, based on the assumption of a first-order laminate microstructure, thus approximating the relaxed energy by an upper bound of the rank-one-convex hull. The semi-relaxed energy can be employed to investigate elasto-plastic models with a single as well as multiple active slip systems. Based on the minimization of a Lagrange functional (consisting of the sum of energy rate and dissipation potential), we outline an incremental strategy to model the time-continuous evolution of the laminate microstructure, then present a numerical scheme by means of which the microstructure development can be computed, and show numerical results for particular examples in single- and double-slip plasticity. 
We discuss the influence of hardening and of slip system orientations in the present model. In contrast to many approaches before, we do not minimize a condensed energy functional. Instead, we incrementally solve the evolution equations at each time step and account for the actual microstructural changes during each time step. Results indicate a reduction in energy when compared to those theories based on a condensed energy functional. © 2010 Springer-Verlag.view abstract doi: 10.1007/s00161-010-0174-5 2011 • 43 **Efficient adaptation of modulation and coding schemes in high quality home networks**

Koetz, H. and Kays, R.*Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)*6557 81-91 (2011)The increasing number of wireless devices exchanging high quality digital multimedia content within the home necessitates high rate, reliable transmission technologies, like IEEE 802.11n. Though providing very high throughput, IEEE 802.11n is not perfectly adapted to the requirements of a wireless multimedia transmission. However, the IEEE 802.11n amendment allows for efficient adaptation of modulation and coding schemes (MCS) to optimize achievable throughput and to fulfill quality of service requirements without any effort for the user. This paper describes an MCS adaptation scheme using information from both PHY and MAC layer with the objective of choosing the optimal MCS in an autonomous, fast and robust way, which can be applied to both a single link and an entire network. In contrast to existing adaptation schemes, it accounts for volatile channel conditions as well as increased collision probability in dense wireless home networks. © 2011 Springer-Verlag.view abstract doi: 10.1007/978-3-642-19167-1_8 2011 • 42 **MBE growth optimization of topological insulator Bi2Te3 films**

Krumrain, J. and Mussler, G. and Borisova, S. and Stoica, T. and Plucinski, L. and Schneider, C.M. and Grützmacher, D.*Journal of Crystal Growth*324 115-118 (2011)We investigated the growth of the topological insulator Bi2Te3 on Si(1 1 1) substrates by means of molecular-beam epitaxy (MBE). The substrate temperature as well as the Bi and Te beam-equivalent pressure (BEP) was varied over a large range. The structure and morphology of the layers were studied using X-ray diffraction (XRD), X-ray reflectivity (XRR) and atomic force microscopy (AFM). The layer-by-layer growth mode with the quintuple layer (QL) as a unit is accomplished on large plateaus if the MBE growth takes place in a Te overpressure. At carefully optimized MBE growth parameters, we obtained atomically smooth, single-crystal Bi2Te3 with large-area single QLs covering about 75% of the layer surface. Angular-resolved photoelectron spectroscopy reveals a linear energy dispersion of charge carriers at the surface, evidencing topologically insulating properties of the Bi2Te3 epilayers. © 2011 Elsevier B.V.view abstract doi: 10.1016/j.jcrysgro.2011.03.008 2011 • 41 **On saturation effects in the Neumann boundary control of elliptic optimal control problems**

Mateos, M. and Rösch, A.*Computational Optimization and Applications*49 359-378 (2011)A Neumann boundary control problem for a linear-quadratic elliptic optimal control problem in a polygonal domain is investigated. The main goal is to show an optimal approximation order for discretized problems after a postprocessing process. It turns out that two saturation processes occur: The regularity of the boundary data of the adjoint is limited if the largest angle of the polygon is at least 2π/3. Moreover, piecewise linear finite elements cannot guarantee the optimal order if the largest angle of the polygon is greater than π/2. We derive error estimates of order h^α with α ∈ [1,2] depending on the largest angle and properties of the finite elements. Finally, numerical tests illustrate the theoretical results. © 2009 Springer Science+Business Media, LLC.view abstract doi: 10.1007/s10589-009-9299-5 2011 • 40 **Transfer-matrix method for efficient ablation by pulsed laser ablation and nanoparticle generation in liquids**

Menéndez-Manjón, A. and Wagener, P. and Barcikowski, S.*Journal of Physical Chemistry C*115 5108-5114 (2011)Comparatively low nanoparticle production is a weakness of femtosecond-pulsed laser ablation in liquids, but the process ablation rate can be maximized at optimal focusing conditions and liquid levels. Refraction at the air-liquid boundary, vaporization of the liquid, self-focusing, and optical breakdown in the liquid complicate the determination of these optimal parameters. A semiempirical method has been developed, allowing an a priori determination of the appropriate experimental setup (liquid layer over the target, focal length, and lens position) for efficient ablation. The presented work can be applied with high accuracy for tightly focused beams, whereas loose focusing of ultrashort laser pulses should be avoided for effective fabrication of colloids via laser ablation in liquids. © 2011 American Chemical Society.view abstract doi: 10.1021/jp109370q 2011 • 39 **Exploratory landscape analysis**
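The role of refraction at the air-liquid boundary discussed in the Menéndez-Manjón, Wagener and Barcikowski entry above can be illustrated with a textbook paraxial, flat-interface estimate; this deliberately ignores the self-focusing and breakdown effects the abstract mentions, so it is a first-order sketch, not the paper's semiempirical method.

```python
def focal_depth_in_liquid(nominal_depth_mm, n_liquid):
    """Paraxial flat-interface estimate: a beam that would focus a
    distance nominal_depth_mm below the air-liquid surface (in the
    absence of the liquid) actually focuses deeper, at a depth of
    n_liquid * nominal_depth_mm below the interface."""
    return n_liquid * nominal_depth_mm

# example: water-like liquid (n = 4/3), nominal focus 9 mm below surface
shift_mm = focal_depth_in_liquid(9.0, 4.0 / 3.0) - 9.0  # focus moves 3 mm deeper
```

This simple shift of the focal point with liquid-layer height is one reason the optimal lens position depends on the liquid level above the target.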

Mersmann, O. and Bischl, B. and Trautmann, H. and Preuss, M. and Weihs, C. and Rudolph, G.*Genetic and Evolutionary Computation Conference, GECCO'11*829-836 (2011)Exploratory Landscape Analysis (ELA) subsumes a number of techniques employed to obtain knowledge about the properties of an unknown optimization problem, especially insofar as these properties are important for the performance of optimization algorithms. Whereas a first attempt could rely on high-level features designed by experts, we approach the problem from a different angle here, namely by using relatively cheap, low-level, computer-generated features. Interestingly, very few features are needed to separate the BBOB problem groups and also to relate a problem to high-level, expert-designed features, paving the way for automatic algorithm selection. Copyright 2011 ACM.view abstract doi: 10.1145/2001576.2001690 2011 • 38 **Uniqueness criteria for the adjoint equation in state-constrained elliptic optimal control**

Meyer, C. and Panizzi, L. and Schiela, A.*Numerical Functional Analysis and Optimization*32 983-1007 (2011)The article considers linear elliptic equations with regular Borel measures as inhomogeneity. Such equations frequently appear in state-constrained optimal control problems. By a counterexample of Serrin [18], it is known that, in the presence of non-smooth data, a standard weak formulation does not ensure uniqueness for such equations. Therefore, several notions of solution have been developed that guarantee uniqueness. In this note, we compare different definitions of solutions, namely the ones of Stampacchia [19] and Boccardo-Gallouët [4] and the two notions of solutions of [2, 7], and show that they are equivalent. As side results, we reformulate the solution in the sense of [19], and prove the existence of solutions in the sense of [2, 4, 7] in the case of mixed boundary conditions. Copyright © Taylor & Francis Group, LLC.view abstract doi: 10.1080/01630563.2011.587074 2011 • 37 **Variational principles in dissipative electro-magneto-mechanics: A framework for the macro-modeling of functional materials**

Miehe, C. and Rosato, D. and Kiefer, B.*International Journal for Numerical Methods in Engineering*86 1225-1276 (2011)This paper presents a general framework for the macroscopic, continuum-based formulation and numerical implementation of dissipative functional materials with electro-magneto-mechanical couplings based on incremental variational principles. We focus on quasi-static problems, where mechanical inertia effects and time-dependent electro-magnetic couplings are a priori neglected and a time-dependence enters the formulation only through a possible rate-dependent dissipative material response. The underlying variational structure of non-reversible coupled processes is related to a canonical constitutive modeling approach, often addressed to so-called standard dissipative materials. It is shown to have enormous consequences with respect to all aspects of the continuum-based modeling in macroscopic electro-magneto-mechanics. At first, the local constitutive modeling of the coupled dissipative response, i.e. stress, electric and magnetic fields versus strain, electric displacement and magnetic induction, is shown to be variational based, governed by incremental minimization and saddle-point principles. Next, the implications on the formulation of boundary-value problems are addressed, which appear in energy-based formulations as minimization principles and in enthalpy-based formulations in the form of saddle-point principles. Furthermore, the material stability of dissipative electro-magneto-mechanics on the macroscopic level is defined based on the convexity/concavity of incremental potentials. We provide a comprehensive outline of alternative variational structures and discuss details of their computational implementation, such as formulation of constitutive update algorithms and finite element solvers. 
From the viewpoint of constitutive modeling, including the understanding of the stability in coupled electro-magneto-mechanics, an energy-based formulation is shown to be the canonical setting. From the viewpoint of the computational convenience, an enthalpy-based formulation is the most convenient setting. A numerical investigation of a multiferroic composite demonstrates perspectives of the proposed framework with regard to the future design of new functional materials. Copyright © 2011 John Wiley & Sons, Ltd.view abstract doi: 10.1002/nme.3127 2011 • 36 **Robustness and optimal use of design principles of arthropod exoskeletons studied by ab initio-based multiscale simulations**

Nikolov, S. and Fabritius, H. and Petrov, M. and Friák, M. and Lymperakis, L. and Sachs, C. and Raabe, D. and Neugebauer, J.*Journal of the Mechanical Behavior of Biomedical Materials*4 129-145 (2011)Recently, we proposed a hierarchical model for the elastic properties of mineralized lobster cuticle using (i) ab initio calculations for the chitin properties and (ii) hierarchical homogenization performed in a bottom-up order through all length scales. It has been found that the cuticle possesses nearly extremal, excellent mechanical properties in terms of stiffness that strongly depend on the overall mineral content and the specific microstructure of the mineral-protein matrix. In this study, we investigated how the overall cuticle properties changed when there are significant variations in the properties of the constituents (chitin, amorphous calcium carbonate (ACC), proteins), and the volume fractions of key structural elements such as chitin-protein fibers. It was found that the cuticle performance is very robust with respect to variations in the elastic properties of chitin and fiber proteins at a lower hierarchy level. At higher structural levels, variations of design parameters such as the volume fraction of the chitin-protein fibers have a significant influence on the cuticle performance. Furthermore, we observed that among the possible variations in the cuticle ingredients and volume fractions, the experimental data reflect an optimal use of the structural variations regarding the best possible performance for a given composition due to the smart hierarchical organization of the cuticle design. © 2010 Elsevier Ltd.view abstract doi: 10.1016/j.jmbbm.2010.09.015 2011 • 35 **Design of an optimal low-cost platform for actuating a driving simulator**

Pacurari, R. and Hesse, B. and Schramm, D.*IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM*469-474 (2011)This contribution proposes a mathematical model for an amplification mechanism (scissors mechanism) intended to be used as an actuating platform in the setup of a driving simulator. The main goal is reproducing different types of road profiles. For this purpose, hydraulic actuation has been chosen because of the high forces and power levels that need to be developed. An optimization method is presented (GA and hybrid functions) for the geometrical parameters of the mechanism, and the effectiveness of the method is validated by simulation results in MATLAB/Simulink. © 2011 IEEE.view abstract doi: 10.1109/AIM.2011.6027090 2011 • 34 **Optimized rotor pitch distributions for screw spindle vacuum pumps**

Pfaller, D. and Brümmer, A. and Kauder, K.*Vacuum*85 1152-1155 (2011)Screw spindle vacuum pumps are characterised by a high suction performance and the ability to achieve high pressure ratios. Screw spindle vacuum pumps have varying progressions for the rotor pitch gradient, depending on the manufacturer. From a scientific point of view, the question arises which rotor gradient along the rotors is to be preferred for a particular set of operating conditions with reference to the machine characteristics. To answer this question, a simulation of the compression process in the screw spindle vacuum pump is performed. The simulation program is used to calculate an energy-specific optimal rotor pitch applying an evolutionary optimization approach. It turns out that - in contrast to currently available rotor geometries - a continuous increase in rotor pitch from the pressure to the suction side is not ideal. An optimized rotor pitch curve is presented and the underlying physical dependencies are clarified by means of pressure and mass flow diagrams. © 2011 Elsevier Ltd. All rights reserved.view abstract doi: 10.1016/j.vacuum.2011.03.002 2011 • 33 **Semi-smooth Newton method for an optimal control problem with control and mixed control-state constraints**
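The "evolutionary optimization approach" named in the Pfaller, Brümmer and Kauder entry above can be sketched in its simplest form as a (1+1) evolution strategy with step-size adaptation; the toy quadratic objective below stands in for the paper's simulation-based pitch evaluation, and the adaptation constants are generic textbook choices, not the authors' algorithm.

```python
import random

def one_plus_one_es(objective, x0, sigma=0.5, iterations=2000, seed=42):
    """Minimal (1+1) evolution strategy with a 1/5-success-rule style
    step-size adaptation; a generic sketch of the method class."""
    rng = random.Random(seed)
    x = list(x0)
    fx = objective(x)
    for _ in range(iterations):
        candidate = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = objective(candidate)
        if fc <= fx:                 # accept the mutant on improvement
            x, fx = candidate, fc
            sigma *= 1.5             # successful step: widen the search
        else:
            sigma *= 1.5 ** -0.25    # failure: shrink (balances near 1/5 success)
    return x, fx

# toy objective standing in for the simulated compression process
best_x, best_f = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

In the paper's setting, each candidate would encode a rotor pitch curve and each evaluation would run the chamber-model simulation instead of the one-line objective used here.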

Rösch, A. and Wachsmuth, D.*Optimization Methods and Software*26 169-186 (2011)A class of optimal control problems for a linear parabolic partial differential equation with control and mixed control-state constraints is considered. For this problem, a projection formula is derived that is equivalent to the necessary optimality conditions. As a main result, the superlinear convergence of a semi-smooth Newton method is shown. Moreover, we show the numerical treatment and several numerical experiments. © 2011 Taylor & Francis.view abstract doi: 10.1080/10556780903548257 2011 • 32 **A broadband stacked patch antenna with enhanced antenna gain by an optimized ellipsoidal reflector for X-band applications**

Schulz, C. and Baer, C. and Musch, T. and Rolfes, I.*2011 IEEE International Conference on Microwaves, Communications, Antennas and Electronic Systems, COMCAS 2011*(2011)A broadband stacked patch antenna, based on an aperture coupled feeding with two patch elements, is presented. The transmission line model is introduced as one possibility for the design of an aperture coupled antenna. 3D electromagnetic field simulations are used for optimization and allow a detailed investigation of the influence of the different antenna parameters on the return loss and the antenna gain. To minimize the effect of an undesired back lobe of the antenna, which results in a poor front-to-back ratio, an optimized ellipsoidal reflector is presented that focuses the radiation in one main direction. A good return loss over a bandwidth of more than 4 GHz, i.e. 42 %, with a maximum antenna gain of 18.1 dBi is achieved. A first prototype verifies the simulation results and the functionality of the developed antenna design. © 2011 IEEE.view abstract doi: 10.1109/COMCAS.2011.6105928 2011 • 31 **Unburned gas temperature measurements in a surrogate Diesel jet via two-color toluene-LIF imaging**

Tea, G. and Bruneaux, G. and Kashdan, J.T. and Schulz, C.*Proceedings of the Combustion Institute*33 783-790 (2011)Non-intrusive temperature measurements of the unburned fuel/air mixture in vaporized Diesel jets have been performed using two-color toluene laser-induced fluorescence (LIF). This diagnostic technique exploits the temperature-dependent spectral shift of the LIF signal which occurs after ultraviolet (UV) excitation of toluene that is added as a tracer to a non-fluorescing base fuel. The method requires the determination of the ratio of LIF intensities collected by two detectors in separate spectral bands. In the current study, measurements were performed in a high-pressure, high-temperature cell capable of reproducing the thermodynamic conditions in the combustion chamber of a Diesel engine during the injection event. Various aspects of the experimental set-up and the data evaluation were optimized. The temperature sensitivity of the measurement strategy is optimal at temperatures below 700 K. Temperature data acquired from two-color LIF thermometry were compared to single-color toluene-LIF measurements using an adiabatic mixing model. The latter is determined from toluene LIF-based fuel concentration measurements, the evaporation enthalpy, and thermocouple measurements of the bath-gas/ambient cell temperature prior to fuel injection. Based on simultaneous measurements with two cameras using identical optical filters, a methodology was developed to optimize the image superposition and to minimize the statistical error. These measurements also allowed the 1-σ precision of the two-color LIF measurement to be determined to be in the 20-40 K range. © 2010 Published by Elsevier Inc. on behalf of The Combustion Institute. All rights reserved.view abstract doi: 10.1016/j.proci.2010.05.074 2011 • 30 **ICP-RIE etching of self-aligned InP based HBTs with Cl2/N2 chemistry**
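Two-color ratio thermometry as in the Tea et al. entry above reduces, at its core, to inverting a calibrated ratio-versus-temperature curve. A minimal sketch follows, with a purely hypothetical exponential calibration: the constants and the functional form are invented for illustration and are not the paper's calibration data.

```python
import math

# hypothetical two-color calibration R(T) = exp(A + B / T); the
# constants below are illustrative placeholders, not measured values
A_CAL, B_CAL = -2.0, 1500.0

def ratio_from_temperature(temperature_k):
    """Forward model: LIF intensity ratio of the two spectral bands."""
    return math.exp(A_CAL + B_CAL / temperature_k)

def temperature_from_ratio(ratio):
    """Invert the calibration to recover temperature from a measured ratio."""
    return B_CAL / (math.log(ratio) - A_CAL)
```

Because the ratio is monotone in temperature under this model, a measured per-pixel ratio maps to a unique temperature; in practice the calibration curve is measured, and the entry's 20-40 K figure quantifies the statistical precision of that mapping.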

Topaloglu, S. and Prost, W. and Tegude, F.-J.*Microelectronic Engineering*88 1601-1605 (2011)We report on a simple Inductively Coupled Plasma-Reactive Ion Etching (ICP-RIE) process with Cl2/N2 chemistry to process InP based, self-aligned HBTs with sub-micron emitters. Since the layer to be etched is in the range of 150 nm (the thickness of the emitter cap and emitter layers), a low etch rate is beneficial. On the other hand, it is also necessary to use chemistries without hydrogen to prevent any possible hydrogen passivation. Therefore, in this work, Cl2/N2 chemistry is selected and a plasma process providing an etch rate of 120 nm/min is optimized. Not only the etch rate but also the electrical and surface quality of the wafers is examined. It has been shown that the etch rate of the optimized process is uniform over the wafer and reproducible. In addition, electrical measurements have shown that there is no degradation in the material quality. To test the optimized process, sub-micron HBTs were fabricated, and RF measurements have shown an fmax of 340 GHz, which makes these devices suitable for high-speed communication systems. In addition, a lower and well-controlled underetch gives a better current-gain distribution over the wafer, leading to better device models and resulting in better yield in MMICs. © 2011 Elsevier B.V. All rights reserved.view abstract doi: 10.1016/j.mee.2011.02.056 2011 • 29 **Femtocell spectrum access underlaid in fractional frequency reused macrocell**

Zhao, Z. and Zheng, F. and Wilzeck, A. and Kaiser, T.*IEEE International Conference on Communications*(2011)This paper focuses on the Femtocell spectrum access problems that arise when a certain number of Femto Base Stations (FBS) are underlaid in a two-tier cellular network, where the Macro Base Stations (MBS) adopt a Fractional Frequency Reuse (FFR) strategy. Under the first-tier outage constraints, the FBS transmission capacity is presented in an analytic form. It is found via the relevant bound analysis that the optimal Femtocell spectrum access rate depends only on the basic configuration parameters of the Macrocell FFR, regardless of the Femto transmit power and the required QoS criteria. Consequently, the optimal Femtocell spectrum access algorithm is proposed in this paper, which achieves substantial gains over conventional algorithms. At the same time, the maximal transmission capacity is related not only to the optimal access rate, but also to system-specific parameters such as the Femto UE transmit power, target SIR and outage rate. Tuning these parameters enables the system to accommodate a larger number of FBSs simultaneously. Finally, the second-tier outage probability constraints are also derived, providing the optimal strategies for FBSs to maximize the second-tier throughput. © 2011 IEEE.view abstract doi: 10.1109/iccw.2011.5963546 2011 • 28 **Analysis of the influence of OFDM sidelobe interference on Femto rich systems**
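Outage-constrained analyses of the kind in the Zhao et al. entry above are commonly cross-checked by Monte Carlo simulation of a Poisson field of interferers. The sketch below is a generic version of that model (Rayleigh fading, power-law path loss, interferers uniform in a disk); all parameter values are illustrative assumptions, not the paper's system setup.

```python
import numpy as np

def outage_probability(density, theta=1.0, alpha=4.0, d_signal=10.0,
                       radius=500.0, trials=2000, seed=0):
    """Monte Carlo estimate of P(SIR < theta) for a Poisson field of
    interferers of the given spatial density (per m^2), with Rayleigh
    fading and path-loss exponent alpha; a generic sketch only."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    outages = 0
    for _ in range(trials):
        n = rng.poisson(density * area)
        r = radius * np.sqrt(rng.random(n))      # uniform positions in a disk
        interference = np.sum(rng.exponential(size=n) * r ** -alpha)
        signal = rng.exponential() * d_signal ** -alpha
        if interference > 0 and signal / interference < theta:
            outages += 1
    return outages / trials

p_sparse = outage_probability(1e-4)   # few femto interferers
p_dense = outage_probability(1e-3)    # ten times denser deployment
```

The monotone growth of outage with deployment density is exactly the effect that turns the first-tier outage requirement into a cap on the Femtocell access rate in the entry above.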

Zhao, Z. and Thein, C. and Zheng, F. and Kaiser, T.*2011 International Symposium on Modeling and Optimization of Mobile, Ad Hoc, and Wireless Networks, WiOpt 2011*445-450 (2011)This paper considers the existence of sidelobe emission in OFDM-based rich-Femtocell systems. By modeling the interference from the arbitrarily distributed Femtocells as a shot-noise process, the paper analyzes the effect of sidelobe interference on system performance metrics such as transmission capacity (TC) and area spectral efficiency (ASE). The numerical results calculated for three different OFDM schemes show that, under the circumstance of very high Femto deployment intensity, the rigorous constraints on the first tier, namely the Macrocell outage probability and the reduction of first-tier ASE, prohibit the Femtocells from excessive subband access. Thus the out-of-band emissions contribute significantly to the system performance loss. Further analysis has shown that poor channel conditions can worsen the situation even further, so that sidelobe interference cannot be ignored. Among the three schemes, OQAM-OFDM is shown to achieve the best performance. Although OQAM-OFDM is more demanding to implement, it can reasonably be considered one of the candidate transmission schemes for future Femtocell applications. © 2011 IEEE.view abstract doi: 10.1109/WIOPT.2011.5930061 2010 • 27 **Medical optimal decision making under uncertainty without assuming independence of symptoms**

Al-Qaysi, I. and Unland, R. and Weihs, C. and Branki, C.*Proceedings - 2nd International Conference on Intelligent Networking and Collaborative Systems, INCOS 2010*238-243 (2010)Efficiency and accuracy are imperative aspects in the world of medical diagnosis; for this reason, we have developed a medical diagnosis system based on a holonic multi-agent system. The holonic multi-agent medical diagnosis system combines the advantages of the holonic paradigm, multi-agent system technology, and swarm intelligence in order to realize a highly reliable, adaptive, scalable, flexible, and robust Internet-based diagnosis system for diseases. This paper also addresses an important assumption in Bayes' theorem: clustering and discrimination provide a method for dealing with dependence among symptoms. The approach builds on the degree of dependency between symptoms, thereby raising the efficiency and accuracy of the diagnosis. The idea is to transform the raw symptoms of each disease into independent groups. Furthermore, the aim of our system is decision making under uncertainty: it achieves an optimal medical diagnosis using swarm techniques and the holonic paradigm without assuming independence of symptoms, although independence of symptoms is the central and critical assumption in Bayes' theorem. Additional factors that play an important role are the required time for the decision process and the reduced costs. © 2010 IEEE.view abstract doi: 10.1109/INCOS.2010.34 2010 • 26 **Holonic and optimal medical decision making under uncertainty**

Al-Qaysi, I. and Othman, Z. and Unland, R. and Weihs, C. and Branki, C.*Proceedings of 2010 IEEE EMBS Conference on Biomedical Engineering and Sciences, IECBES 2010*295-299 (2010)The holonic multi-agent medical diagnosis system combines the advantages of the holonic paradigm, multi-agent system technology, and swarm intelligence in order to realize a highly reliable, adaptive, scalable, flexible, and robust Internet-based diagnosis system for diseases. This paper concentrates on the decision process within our system and presents our ideas, which are based on decision theory and, especially, on Bayesian probability, since uncertainty is an inherent feature of a medical diagnosis process. The presented approach focuses on reaching the optimal medical diagnosis with the minimum risk under the given uncertainty. Additional factors that play an important role are the required time for the decision process and the produced costs. © 2010 IEEE. doi: 10.1109/IECBES.2010.5742247 2010 • 25 **Sequential parameter optimization of an evolution strategy for the design of mold temperature control systems**

Biermann, D. and Joliet, R. and Michelitsch, T. and Wagner, T.*2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 IEEE Congress on Evolutionary Computation, CEC 2010*(2010)Sequential Parameter Optimization (SPO) is a popular model-assisted approach for tuning the parameters of metaheuristics, which is based on models from the Design and Analysis of Computer Experiments (DACE). Despite the indisputable success of SPO, some of the assumptions behind DACE, such as deterministic output and stationary covariance, do not hold for parameter optimization. Thus, an analysis of enhanced covariance kernels for the consideration of noise is performed. Furthermore, the effects of different sequential sampling strategies and an increasing number of replicates of each design on the quality of the models are discussed. To accomplish this, an Evolution Strategy (ES) is tuned on the real-world optimization problem of designing mold temperature control systems. Based on the results, recommendations for the ES parameters are provided, insights into the robustness of DACE with respect to these violations are gained, and recommendations for appropriate combinations of sampling strategies and covariance kernels are derived. © 2010 IEEE. doi: 10.1109/CEC.2010.5586314 2010 • 24 **Design approaches for wire robots**
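The noise-aware covariance kernels analysed in the SPO entry above can be sketched as Gaussian-process (DACE-style) regression with a nugget term on the diagonal; the kernel choice, length scale, and data below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def kriging_predict(X, y, Xs, length=1.0, nugget=1e-2):
    """DACE-style Gaussian-process prediction; the nugget on the diagonal
    models noisy (non-deterministic) outputs such as metaheuristic runs."""
    def k(a, b):
        return np.exp(-((a[:, None] - b[None, :]) / length) ** 2)  # squared-exponential
    K = k(X, X) + nugget * np.eye(len(X))  # enhanced kernel: correlation + noise term
    w = np.linalg.solve(K, y)              # fit weights on the training data
    return k(Xs, X) @ w                    # predict at the new points Xs

X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # illustrative design points
y = np.sin(X)                              # stand-in for measured performance
pred = kriging_predict(X, y, np.array([0.75]))
```

With `nugget=0` the model interpolates every (possibly noisy) observation exactly; a positive nugget lets the surrogate smooth over replicate-to-replicate variation.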

Bruckmann, T. and Mikelsons, L. and Brandt, T. and Hiller, M. and Schramm, D.*Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference 2009, DETC2009*7 25-34 (2010)Wire robots consist of a movable end-effector which is connected to the machine frame by motor-driven wires. Since wires can transmit only tension, positive wire forces have to be ensured. During workspace analysis, the wire forces need to be calculated. Discrete methods do not produce satisfactory results, since intermediate points on the discrete calculation grids are neglected. Using intervals instead of points leads to reliable results. Formulating the analysis problem as a Constraint-Satisfaction-Problem (CSP) allows a convenient transition to the synthesis problem, i.e. finding suitable designs for practical applications. In this paper, two synthesis approaches are employed: Design-to-Workspace (i.e. calculation of an optimal robot layout for a given workspace) and an extension called Design-to-Task (i.e. calculation of the optimal robot for a specific task). To solve these optimization problems, the paper presents approaches to combine the reliability and robustness of interval-based computations with the effectiveness of available optimizer implementations. Copyright © 2009 by ASME. doi: 10.1115/DETC2009-86720 2010 • 23 **An active suspension system for simulation of ship maneuvers in wind tunnels**
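The interval-based workspace analysis described in the wire-robot entry above rests on evaluating force expressions over whole boxes instead of grid points; a toy interval-arithmetic sketch, with a purely hypothetical force expression, is:

```python
class Interval:
    """Minimal interval arithmetic: evaluating an expression on intervals
    yields guaranteed bounds over a whole workspace box, not just grid points."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def is_positive(self):
        return self.lo > 0  # tension guaranteed positive over the entire box

# Hypothetical wire-force expression f = a*x + b, with the pose variable x
# anywhere in [0.2, 0.8]; the result encloses every possible force value.
force = Interval(2.0, 3.0) * Interval(0.2, 0.8) + Interval(1.0, 1.5)
```

If `force.is_positive()` holds, positive wire tension is certified for every pose in the box, which is exactly what a discrete grid cannot guarantee.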

Bruckmann, T. and Hiller, M. and Schramm, D.*Mechanisms and Machine Science*5 537-544 (2010)Wind tunnels are an experimental tool to evaluate the air flow properties of vehicles in model scale and to optimize the design of aircraft and aircraft components. The hydrodynamic properties of marine components such as ship hulls or propulsion systems can also be examined. For advanced optimization, it is necessary to guide the models along defined trajectories during the tests to vary the angle of attack. Due to their good aerodynamic properties, parallel wire robots have been successfully used to perform these maneuvers in wind tunnels. Compared to aircraft hulls, marine models may be very heavy (up to 150 kg). Thus, the suspension system must be very stiff to avoid vibrations. Additionally, fast maneuvers require powerful drives. On the other hand, the positioning system should not influence the air flow, to ensure unaltered experimental results. In this paper, different designs are presented and discussed. © Springer Science+Business Media B.V. 2010. doi: 10.1007/978-90-481-9689-0_62 2010 • 22 **Optimization of mesh-based anodes for direct methanol fuel cells**

Chetty, R. and Scott, K. and Kundu, S. and Muhler, M.*Journal of Fuel Cell Science and Technology*7 0310101-0310119 (2010)Platinum-based binary and ternary catalysts were prepared by thermal decomposition onto a titanium mesh and were evaluated for the anodic oxidation of methanol. The binary Pt:Ru catalyst with a composition of 1:1 gave the highest performance for methanol oxidation at 80 °C. The effect of temperature and time for thermal decomposition was optimized with respect to methanol oxidation, and the catalysts were characterized by cyclic voltammetry, linear sweep voltammetry, scanning electron microscopy, X-ray diffraction studies, and X-ray photoelectron spectroscopy. The best catalyst was evaluated in a single fuel cell, and the effect of methanol concentration, temperature, and oxygen/air flow was studied. The mesh-based fuel cell, operating at 80 °C with 1 mol dm⁻³ methanol, gave maximum power densities of 38 mW cm⁻² and 22 mW cm⁻² with 1 bar (gauge) oxygen and air, respectively. © 2010 by ASME. doi: 10.1115/1.3117605 2010 • 21 **On a bilinear optimization problem in parallel magnetic resonance imaging**

Clason, C. and von Winckel, G.*Applied Mathematics and Computation*216 1443-1452 (2010)This work is concerned with the structure of bilinear minimization problems arising in recovering sub-sampled and modulated images in parallel magnetic resonance imaging. By considering a physically reasonable simplified model exhibiting the same fundamental mathematical difficulties, it is shown that such problems suffer from poor gradient scaling and non-convexity, which causes standard optimization methods to perform inadequately. A globalized quasi-Newton method is proposed which is able to reconstruct both the image and the unknown modulations without additional a priori information. Thus the present paper serves as a first contribution toward understanding and solving such bilinear optimization problems. © 2010 Elsevier Inc. All rights reserved. doi: 10.1016/j.amc.2010.02.047 2010 • 20 **Medical optimal decision making based holonic multi agent system**

Esra, A. and Reiner, U. and Weihs, C. and Branki, C.*2010 The 2nd International Conference on Computer and Automation Engineering, ICCAE 2010*3 392-396 (2010)This paper concentrates on the decision process, which is based on multi-agent system theory, the holonic paradigm, swarm intelligence techniques, and Bayesian probability, since uncertainty is an inherent feature of a medical diagnostic process that must nevertheless deliver highly reliable results. The presented approach focuses on reaching the optimal medical diagnosis with the minimum risk under the given uncertainty. Additional factors that play an important role are the required time for the decision process and the produced costs. © 2010 IEEE. doi: 10.1109/ICCAE.2010.5451382 2010 • 19 **Adaptation and focusing of optode configurations for fluorescence optical tomography by experimental design methods**

Freiberger, M. and Clason, C. and Scharfetter, H.*Journal of Biomedical Optics*15 (2010)Fluorescence tomography excites a fluorophore inside a sample by light sources on the surface. From boundary measurements of the fluorescent light, the distribution of the fluorophore is reconstructed. The optode placement determines the quality of the reconstructions in terms of, e.g., resolution and contrast-to-noise ratio. We address the adaptation of the measurement setup. The redundancy of the measurements is chosen as a quality criterion for the optodes and is computed from the Jacobian of the mathematical formulation of light propagation. The algorithm finds a subset with minimum redundancy in the measurements from a feasible pool of optodes. This allows biasing the design in order to favor reconstruction results inside a given region. Two different variations of the algorithm, based on geometric and arithmetic averaging, are compared. Both deliver similar optode configurations. The arithmetic averaging is slightly more stable, whereas the geometric averaging approach shows a better conditioning of the sensitivity matrix and mathematically corresponds more closely with entropy optimization. Adapted illumination and detector patterns are presented for an initial set of 96 optodes placed on a cylinder with focusing on different regions. Examples for the attenuation of fluorophore signals from regions outside the focus are given. © 2010 Society of Photo-Optical Instrumentation Engineers. doi: 10.1117/1.3316405 2010 • 18 **Total variation regularization for nonlinear fluorescence tomography with an augmented Lagrangian splitting approach**

Freiberger, M. and Clason, C. and Scharfetter, H.*Applied Optics*49 3741-3747 (2010)Fluorescence tomography is an imaging modality that seeks to reconstruct the distribution of fluorescent dyes inside a highly scattering sample from light measurements on the boundary. Using common inversion methods with L2 penalties typically leads to smooth reconstructions, which degrades the obtainable resolution. The use of total variation (TV) regularization for the inverse model is investigated. To solve the inverse problem efficiently, an augmented Lagrange method is utilized that allows separating the Gauss-Newton minimization from the TV minimization. Results on noisy simulation data provide evidence that the reconstructed inclusions are much better localized and that their half-width measure decreases by at least 25% compared to ordinary L2 reconstructions. © 2010 Optical Society of America. doi: 10.1364/AO.49.003741 2010 • 17 **Numerical material flow optimization of a multi-hole extrusion process**
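The contrast between smooth L2 reconstructions and edge-preserving TV regularization mentioned in the entry above can be illustrated with a toy 1-D smoothed-TV denoiser solved by plain gradient descent; this is a didactic stand-in, not the augmented Lagrangian splitting scheme of the paper.

```python
import numpy as np

def tv_denoise(f, lam=0.1, eps=1e-2, iters=300, step=0.2):
    """Toy 1-D denoising with a smoothed total-variation penalty,
    0.5*||u-f||^2 + lam*sum sqrt((u[i+1]-u[i])^2 + eps), by gradient descent.
    Unlike an L2 penalty, TV flattens noise while keeping the jump sharp."""
    u = f.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)
        g = du / np.sqrt(du ** 2 + eps)                       # phi'(du)
        div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))  # discrete divergence
        u -= step * ((u - f) - lam * div)                     # objective gradient
    return u

# Noisy step signal: an edge at index 20 plus a small oscillation.
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * np.sin(np.arange(40))
u = tv_denoise(f)
```

The smoothing parameter `eps` and the step size are chosen conservatively so the descent stays stable; a production solver would use a splitting or primal-dual method instead.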

Kloppenborg, T. and Brosius, A. and Tekkaya, A.E.*Advanced Materials Research*83-86 826-833 (2010)Decreasing the bearing length in aluminum extrusion processes increases the material flow and thereby offers the possibility of correction and optimization. This study presents a simulation-based optimization technique which uses this effect to optimize the material flow in a direct multi-hole extrusion process. First, the extrusion process was numerically simulated for the production of three rectangular profiles with equal cross sections. Here, the die orifices were arranged at various distances from the die centre, which led to different profile exit speeds. Based on the initial numerical calculation, an automated optimization of the bearing length with the adaptive-response-surface method was set up to achieve uniform exit speeds for all profiles. Finally, an experimental verification was carried out to show the influence of the optimized die design. © (2010) Trans Tech Publications. doi: 10.4028/www.scientific.net/AMR.83-86.826 2010 • 16 **Desirability-based multi-criteria optimisation of HVOF spray experiments**

Kopp, G. and Baumann, I. and Vogli, E. and Tillmann, W. and Weihs, C.*Studies in Classification, Data Analysis, and Knowledge Organization*811-818 (2010)The reduction of the powder grain size is of key interest in thermal spray technology to produce superfine structured cermet coatings. Due to the low specific weight and high thermal susceptibility of such fine powders, the use of appropriate process technologies and optimised process settings is required. Experimental design and the desirability index are employed to find optimal settings of a high velocity oxygen fuel (HVOF) spraying process using fine powders (2-8 μm). The independent factors kerosene, hydrogen, oxygen, gun velocity, stand-off distance, cooling pressure, carrier gas and disc velocity are considered in a 12-run Plackett-Burman design, and their effects on the deposition efficiency and on the coating characteristics microhardness, porosity and roughness are estimated. Following an examination of possible 2-way interactions in a 2^(5-1) fractional-factorial design, the three most relevant factors are analysed in a central composite design. Derringer's desirability function and the desirability index are applied to find optimal factor settings with respect to the above characteristics. All analyses are carried out with the statistics software "R". The optimisation of the desirability index is done using the R package "desiRe". © 2010 Springer-Verlag Berlin Heidelberg. doi: 10.1007/978-3-642-10745-0-90 2010 • 15 **A priori error analysis for linear quadratic elliptic Neumann boundary control problems with control and state constraints**
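Derringer's desirability approach used in the entry above combines individual desirabilities into one index via a geometric mean; a minimal sketch with invented response values (the bounds and responses are not from the paper):

```python
import numpy as np

def desirability(y, low, high):
    """Derringer-type one-sided desirability: 0 below `low`, 1 above `high`,
    linear in between (exponent s = 1)."""
    return float(np.clip((y - low) / (high - low), 0.0, 1.0))

def desirability_index(ds):
    """Overall desirability index: geometric mean of individual desirabilities."""
    ds = np.maximum(np.asarray(ds, dtype=float), 1e-12)
    return float(np.exp(np.mean(np.log(ds))))

# Invented response values: maximize deposition efficiency, minimize porosity
# (minimization handled here by negating the response).
d_eff = desirability(65.0, low=40.0, high=80.0)   # -> 0.625
d_por = desirability(-2.0, low=-5.0, high=-1.0)   # porosity of 2% -> 0.75
D = desirability_index([d_eff, d_por])
```

Because the index is a geometric mean, a single fully undesirable response (d = 0) drives the whole index to zero, which is what makes it suitable for finding a compromise across all coating characteristics.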

Krumbiegel, K. and Meyer, C. and Rösch, A.*SIAM Journal on Control and Optimization*48 5108-5142 (2010)In this paper we consider a state-constrained optimal control problem with boundary control, where the state constraints are imposed only in an interior subdomain. Our goal is to derive a priori error estimates for a finite element discretization with and without additional regularization. We will show that the separation of the set where the control acts and the set where the state constraints are given improves the approximation rates significantly. The theoretical results are illustrated by numerical computations. © 2010 Society for Industrial and Applied Mathematics. doi: 10.1137/090746148 2010 • 14 **Nonlinear reaction coordinate analysis in the reweighted path ensemble**

Lechner, W. and Rogal, J. and Juraszek, J. and Ensing, B. and Bolhuis, P.G.*Journal of Chemical Physics*133 (2010)We present a flexible nonlinear reaction coordinate analysis method for the transition path ensemble based on the likelihood maximization approach developed by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)]. By parametrizing the reaction coordinate by a string of images in a collective variable space, we can optimize the likelihood that the string correctly models the committor data obtained from a path sampling simulation. The collective variable space with the maximum likelihood is considered to contain the best description of the reaction. The use of the reweighted path ensemble [J. Rogal, J. Chem. Phys. 133, 174109 (2010)] allows a complete reaction coordinate description from the initial to the final state. We illustrate the method on a z-shaped two-dimensional potential. While developed for use with path sampling, this analysis method can also be applied to regular molecular dynamics trajectories. © 2010 American Institute of Physics. doi: 10.1063/1.3491818 2010 • 13 **Plane-wave implementation of the real-space k·p formalism and continuum elasticity theory**

Marquardt, O. and Boeck, S. and Freysoldt, C. and Hickel, T. and Neugebauer, J.*Computer Physics Communications*181 765-771 (2010)In this work we demonstrate how second-order continuum elasticity theory and an eight-band k·p model can be implemented in an existing density functional theory (DFT) plane-wave code. The plane-wave formulation of these two formalisms allows for an accurate and efficient description of elastic and electronic properties of semiconductor nanostructures such as quantum dots, wires, and films. Gradient operators that are computationally expensive in a real-space formulation can be calculated much more efficiently in reciprocal space. The accuracy can be directly controlled by the plane-wave cutoff. Furthermore, minimization schemes typically available in plane-wave DFT codes can be applied straightforwardly with only a few modifications to a plane-wave formulation of these continuum models. As an example, the elastic and electronic properties of a III-nitride quantum dot system are calculated. © 2009 Elsevier B.V. All rights reserved. doi: 10.1016/j.cpc.2009.12.009 2010 • 12 **Intentionally positioned self-assembled InAs quantum dots in an electroluminescent pin junction diode**

Mehta, M. and Reuter, D. and Melnikov, A. and Wieck, A.D. and Michaelis De Vasconcellos, S. and Baumgarten, T. and Zrenner, A. and Meier, C.*Physica E: Low-Dimensional Systems and Nanostructures*42 2749-2752 (2010)The intentional positioning of optically active quantum dots using site-selective growth by a combination of molecular beam epitaxy (MBE) and focused ion beam (FIB) implantation in an all-ultra-high-vacuum (UHV) setup has been successfully demonstrated. A square array of periodic holes on a GaAs substrate was fabricated by FIB implantation of 30 keV Ga ions, followed by an in situ annealing step. Subsequently, the patterned holes were overgrown with an optimized amount of InAs in order to achieve site-selective growth of the QDs on the patterned holes. Under well-optimized conditions, a selectivity of single quantum dot growth in the patterned holes of 52% was achieved. Thereafter, carrier injection and subsequent radiative recombination from the positioned InAs/GaAs self-assembled QDs were investigated by embedding the QDs in the intrinsic part of a GaAs-based pin junction device. Electroluminescence spectra taken at 77 K show interband transitions up to the fifth excited state of the QDs. © 2009 Elsevier B.V. All rights reserved. doi: 10.1016/j.physe.2009.12.053 2010 • 11 **Enhancing ubiquitous systems through system call mining**

Morik, K. and Jungermann, F. and Piatkowski, N. and Engel, M.*Proceedings - IEEE International Conference on Data Mining, ICDM*1338-1345 (2010)Collecting, monitoring, and analyzing data automatically by well-instrumented systems is frequently motivated by human decision-making. However, the same need occurs when system software decisions are to be justified. Compiler optimization or storage management requires several decisions which result in more or less resource consumption, be it energy, memory, or runtime. A large volume of system data can be collected in order to base decisions of compilers or the operating system on empirical analysis. The challenge of large-scale data is aggravated if system data of small and often mobile systems are collected and analyzed. In contrast to the large data volume, the mobile devices offer only very limited storage and computing capacity. Moreover, if analysis results are put to use at the operating system, the real-time response is at the system level, not on the level of human reaction time. In this paper, small and most often mobile systems (i.e., ubiquitous systems) are instrumented for the collection of system call data. It is investigated whether the sequence and the structure of system calls are to be taken into account by the learning method, or not. A structural learning method, Conditional Random Fields (CRF), is applied using different internal optimization algorithms and feature mappings. Implementing CRF in a massively parallel way using general-purpose graphics processing units (GPGPU) points at future ubiquitous systems. © 2010 IEEE. doi: 10.1109/ICDMW.2010.133 2010 • 10 **Variationally consistent modeling of finite strain plasticity theory with non-linear kinematic hardening**

Mosler, J.*Computer Methods in Applied Mechanics and Engineering*199 2753-2764 (2010)Variational constitutive updates provide a physically and mathematically sound framework for the numerical implementation of material models. In contrast to conventional schemes such as the return-mapping algorithm, they are directly and naturally based on the underlying variational principle. Hence, the resulting numerical scheme inherits all properties of that principle. In the present paper, the focus is on a certain class of those variational methods which relies on energy minimization. Consequently, the algorithmic formulation is governed by energy minimization as well. Accordingly, standard optimization algorithms can be applied to solve the numerical problem. A further advantage compared to conventional approaches is the existence of a natural distance (semi-metric) induced by the minimization principle. Such a distance is the foundation for error estimation and, as a result, for adaptive finite element methods. Though variational constitutive updates are relatively well developed for so-called standard dissipative solids, i.e., solids characterized by the normality rule, the more general case, i.e., generalized standard materials, is far from being understood. More precisely, (Int. J. Sol. Struct. 2009, 46:1676-1684) represents the first step towards this goal. In the present paper, a variational constitutive update suitable for a class of nonlinear kinematic hardening models at finite strains is presented. Two different prototype models of Armstrong-Frederick type are reformulated into the aforementioned variationally consistent framework. Numerical tests demonstrate the consistency of the resulting implementation. © 2010 Elsevier B.V. doi: 10.1016/j.cma.2010.03.025 2010 • 9 **On the implementation of rate-independent standard dissipative solids at finite strain - Variational constitutive updates**

Mosler, J. and Bruhns, O.T.*Computer Methods in Applied Mechanics and Engineering*199 417-429 (2010)This paper is concerned with an efficient, variationally consistent implementation for rate-independent dissipative solids at finite strain. More precisely, the focus is on finite strain plasticity theory based on a multiplicative decomposition of the deformation gradient. Adopting the formalism of standard dissipative solids, which allows constitutive models to be described by means of only two potentials, namely the Helmholtz energy and the yield function (or equivalently, a dissipation functional), finite strain plasticity is recast into an equivalent minimization problem. In contrast to previous models, the presented framework covers isotropic and kinematic hardening as well as isotropic and anisotropic elasticity and yield functions. Based on this approach, a novel numerical implementation representing the main contribution of the paper is given. In sharp contrast to by now classical approaches such as the return-mapping algorithm, and analogously to the theoretical part, the numerical formulation is variationally consistent, i.e., all unknown variables follow naturally from minimizing the energy of the considered system. Consequently, several different numerically efficient and robust optimization schemes can be directly employed for solving the resulting minimization problem. Extending previously published works on variational constitutive updates, the advocated model does not rely on any material symmetry and can therefore be applied to a broad range of different plasticity theories. As two examples, an anisotropic Hill-type and a Barlat-type model are implemented. Numerical examples demonstrate the applicability and the performance of the proposed implementation. © 2009 Elsevier B.V. All rights reserved. doi: 10.1016/j.cma.2009.07.006 2010 • 8 **Evaluation of factors influencing deep cryogenic treatment that affect the properties of tool steels**

Oppenkowski, A. and Weber, S. and Theisen, W.*Journal of Materials Processing Technology*210 1949-1955 (2010)Deep cryogenic treatment (DCT) of tool steels is used as an additive process to conventional heat treatment and usually involves cooling the material to liquid nitrogen temperature (-196 °C). This kind of treatment has been reported to improve the wear resistance of tools. In this study, the Taguchi method was used to identify the main factors of DCT that influence the mechanical properties and the wear resistance of the powder-metallurgically produced cold-work tool steel X153CrVMo12 (AISI D2). The factors investigated were the austenitizing temperature, cooling rate, holding time, heating rate, and tempering temperature. In order to study the significance of these factors and the effect of possible two-factor interactions, an L27(3^13) orthogonal array (OA) was applied to conduct several heat treatments, including a single DCT cycle directly after quenching prior to tempering. The results show that the most significant factors influencing the properties of tool steels are the austenitizing and tempering temperatures. In contrast, the parameters of the deep cryogenic treatment exhibit a lower level of significance. Further investigations identified a nearly constant wear rate for holding times of up to 24 h. The wear rate reaches a minimum for a longer holding time of 36 h and increases again with further holding. © 2010 Elsevier B.V. All rights reserved. doi: 10.1016/j.jmatprotec.2010.07.007 2010 • 7 **Optimized dynamical decoupling for power-law noise spectra**

Pasini, S. and Uhrig, G.S.*Physical Review A - Atomic, Molecular, and Optical Physics*81 (2010)We analyze the suppression of decoherence by means of dynamical decoupling in the pure-dephasing spin-boson model for baths with power-law spectra. The sequence of ideal π pulses is optimized according to the power of the bath. We expand the decoherence function and separate the canceling divergences from the relevant terms. The proposed sequence is chosen to be the one minimizing the decoherence function. By construction, it provides the best performance. We analytically derive the conditions that must be satisfied. The resulting equations are solved numerically. The solutions are very close to the Carr-Purcell-Meiboom-Gill sequence for a soft cutoff of the bath while they approach the Uhrig dynamical-decoupling sequence as the cutoff becomes harder. © 2010 The American Physical Society. doi: 10.1103/PhysRevA.81.012309 2010 • 6 **Toward process optimization in laser welding of metal to polymer**
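The two pulse-sequence families compared in the entry above have simple closed-form timings: CPMG places π pulses equidistantly, while the Uhrig dynamical-decoupling (UDD) sequence uses t_j = T sin²(πj/(2N+2)). A small sketch of both formulas:

```python
import numpy as np

def udd_times(N, T):
    """Uhrig dynamical decoupling: pulse instants t_j = T*sin^2(pi*j/(2N+2))."""
    j = np.arange(1, N + 1)
    return T * np.sin(np.pi * j / (2 * N + 2)) ** 2

def cpmg_times(N, T):
    """Carr-Purcell-Meiboom-Gill: equidistant pulses at t_j = T*(j - 0.5)/N."""
    j = np.arange(1, N + 1)
    return T * (j - 0.5) / N

udd, cpmg = udd_times(5, 1.0), cpmg_times(5, 1.0)  # UDD bunches pulses near the ends
```

Both sequences are symmetric about T/2, and for a single pulse (N = 1) they coincide at t = T/2; the optimized sequences of the paper interpolate between these two limits depending on the bath cutoff.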

Tillmann, W. and Elrefaey, A. and Wojarski, L.*Materialwissenschaft und Werkstofftechnik*41 879-883 (2010)The joining technology for dissimilar lightweight materials, between metals and polymers, is essential for realizing cars with hybrid structures and for other engineering applications. These types of joints are still difficult to generate and their behaviour is not fully understood. Laser welding offers specific process advantages over conventional technologies, such as short process times, while providing optically and qualitatively valuable weld seams and imposing minimal thermal stress. Furthermore, the process is compatible with automation. This paper summarizes the efforts to attain suitable joint strengths with the stainless steel plate type S30400 and a Polyethylene Terephthalate Glycol (PETG) plastic sheet. The study considers the optimization of two important process parameters, namely laser power and welding speed. Microstructural features, tests of the tensile shear strength, and investigations of the fracture location and morphology were used to evaluate the joint performance. The results indicate that there is an optimum value for the laser power which achieves sufficient melting and heat transfer to the joint without decomposing the plastic sheet and hence enables a high joint strength to be obtained. Moreover, a low welding speed is preferable in most combinations of welding parameters, since it achieves adequate melting and wetting of the polymer onto the steel surface. Copyright 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. doi: 10.1002/mawe.201000674 2010 • 5 **Desirability-based multi-criteria optimization of HVOF spray experiments to manufacture fine structured wear-resistant 75Cr3C2-25(NiCr20) coatings**

Tillmann, W. and Vogli, E. and Baumann, I. and Kopp, G. and Weihs, C.*Journal of Thermal Spray Technology*19 392-408 (2010)Thermal spraying of fine feedstock powders allows the deposition of cermet coatings with significantly improved characteristics and is currently of great interest in science and industry. However, due to their high surface-to-volume ratio and low specific weight, fine particles are not only difficult to spray but also show a poor flowability in the feeding process. In order to process fine powders reliably and to preserve the fine structure of the feedstock material in the final coating morphology, the use of novel thermal spray equipment as well as a thorough selection and optimization of the process parameters are fundamentally required. In this study, HVOF spray experiments have been conducted to manufacture fine structured, wear-resistant cermet coatings using fine 75Cr3C2-25(Ni20Cr) powders (-8 + 2 μm). Statistical design of experiments (DOE) has been utilized to identify the most relevant process parameters with their linear, quadratic and interaction effects, using Plackett-Burman, fractional-factorial and central composite designs to model the deposition efficiency of the process and the most important coating properties: roughness, hardness and porosity. The concept of desirability functions and the desirability index has been applied to combine these response variables in order to find a process parameter combination that yields either optimum results for all responses, or at least the best possible compromise. Verification experiments at the optimum found in this way yielded very satisfactory or even excellent results. The coatings featured an average microhardness of 1004 HV0.1, a roughness Ra = 1.9 μm and a porosity of 1.7%. In addition, a high deposition efficiency of 71% could be obtained. © 2009 ASM International. doi: 10.1007/s11666-009-9383-5 2010 • 4 **Numerical simulation and benchmarking of a monolithic multigrid solver for fluid-structure interaction problems with application to hemodynamics**

Turek, S. and Hron, J. and Mádlík, M. and Razzaq, M. and Wobker, H. and Acker, J.F.*Lecture Notes in Computational Science and Engineering*73 LNCSE 193-220 (2010)An Arbitrary Lagrangian-Eulerian (ALE) formulation is applied in a fully coupled monolithic way, considering the fluid-structure interaction (FSI) problem as one continuum. The mathematical description and the numerical schemes are designed in such a way that general constitutive relations (which are realistic for biomechanics applications) for the fluid as well as for the structural part can be easily incorporated. We utilize the LBB-stable finite element pairs Q2P1 and P2+P1 for discretization in space to gain high accuracy, and perform as time-stepping the 2nd-order Crank-Nicolson scheme and, respectively, a new modified Fractional-Step-θ-scheme for both solid and fluid parts. The resulting discretized nonlinear algebraic system is solved by a Newton method which approximates the Jacobian matrices by a divided-differences approach, and the resulting linear systems are solved by direct or iterative solvers, preferably of Krylov-multigrid type. For validation and evaluation of the accuracy and performance of the proposed methodology, we present corresponding results for a new set of FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in laminar channel flow, allowing stationary as well as periodically oscillating deformations. Then, as an example of FSI in biomedical problems, the influence of endovascular stent implantation on cerebral aneurysm hemodynamics is numerically investigated. The aim is to study the interaction of the elastic walls of the aneurysm with the geometrical shape of the implanted stent structure for prototypical 2D configurations. This study can be seen as a basic step towards understanding the resulting complex flow phenomena, so that in the future aneurysm rupture can be suppressed by an optimal setting of the implanted stent geometry. © 2011 Springer. doi: 10.1007/978-3-642-14206-2_8 2010 • 3 **Rigorous bounds for optimal dynamical decoupling**

Uhrig, G.S. and Lidar, D.A.*Physical Review A - Atomic, Molecular, and Optical Physics*82 (2010)We present rigorous performance bounds for the optimal dynamical decoupling pulse sequence protecting a quantum bit (qubit) against pure dephasing. Our bounds apply under the assumption of instantaneous pulses and of bounded perturbing environment and qubit-environment Hamiltonians such as those realized by baths of nuclear spins in quantum dots. We show that if the total sequence time is fixed the optimal sequence can be used to make the distance between the protected and unperturbed qubit states arbitrarily small in the number of applied pulses. If, on the other hand, the minimum pulse interval is fixed and the total sequence time is allowed to scale with the number of pulses, then longer sequences need not always be advantageous. The rigorous bound may serve as a testbed for approximate treatments of optimal decoupling in bounded models of decoherence. © 2010 The American Physical Society.view abstract doi: 10.1103/PhysRevA.82.012301 2010 • 2 **Efficient coherent control by sequences of pulses of finite duration**

Uhrig, G.S. and Pasini, S.*New Journal of Physics*12 (2010)Reliable long-time storage of arbitrary quantum states is a key element of quantum information processing. In order to dynamically decouple a spin or quantum bit from a dephasing environment by non-instantaneous pulses, we introduce an optimized sequence of N control π pulses that are realistic in the sense that they have a finite duration and a finite amplitude. We show that optimized dynamical decoupling is still applicable and that higher-order decoupling can be reached if shaped pulses are implemented. The sequence suppresses decoherence up to the order script O sign (TN+1) + script O sign (τmx M), with T being the total duration of the sequence and τmx the maximum length of the pulses. The exponent Mεℕ depends on the shape of the pulse. Based on existing experiments, a concrete setup for the verification of the properties of the advocated sequence is proposed. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.view abstract doi: 10.1088/1367-2630/12/4/045001 2010 • 1 **Identification of optimized Ti-Ni-Cu shape memory alloy compositions for high-frequency thin film microactuator applications**

Zarnetta, R. and Ehmann, M. and Savan, A. and Ludwig, Al.*Smart Materials and Structures*19 (2010)Ti-Ni-Cu shape memory thin films within a broad composition range were investigated by the cantilever deflection method using combinatorial methods. Optimal compositions with improved functional properties, i.e.large recovery stress, high transformation temperatures, low thermal hysteresis width and small temperature interval of transformation, were identified using a newly defined figure of merit. Of the investigated alloys, Ti50Ni 41Cu9 and Ti45Ni46Cu9 exhibit the best shape memory properties for compositions showing a B2 → B19 and a B2 → R-phase transformation, respectively. © 2010 IOP Publishing Ltd.view abstract doi: 10.1088/0964-1726/19/6/065032