Browsing by Author "Peng, Jiming"
Now showing 1 - 20 of 24
Item An Optimization Framework for Resilience-based Power Grid Restoration (2018-08)
Abbasi, Saeedeh; Lim, Gino J.; Lee, Taewoo; Peng, Jiming; Barati, Masoud; Vipulanandan, Cumaraswamy

Power outage is a severe consequence of an extreme event that affects a wide range of consumers, including homes, hospitals, and commercial industries. An extreme event such as a hurricane, windstorm, or earthquake can disrupt power grids located in open areas. In a power grid, transmission lines are the most vulnerable equipment, and their damage usually results in a cascading failure of the whole network. Although a power system should be strengthened in advance to withstand these events, having a plan to restore the failed power grid is essential. Emergency generation units play an important role in a restoration process; these pre-located units are called black start (BS) units. The restoration process with BS units is conducted through parallel restoration over independent sections within a network. Appropriate sectionalization makes a power system more resilient against a long outage, and assessing and optimizing the resilience of a power system can improve the quality of the restoration process. To achieve this resilient power system, a mathematical model is presented to maximize the system's resiliency while planning a restoration process. The system resiliency is measured through an innovative resilience vector. As a result, the restoration can be performed quickly to satisfy all critical demands. The model is a mixed integer program (MIP), which is decomposed into a bi-level model that can be solved with lower complexity. As an alternative to bi-level programming, a mathematical program with equilibrium constraints (MPEC) approach is also applied to solve the model. The comparison between the results of both methods demonstrates the high efficiency of the bi-level programming solution methodology in a large-scale case.
A pre-emptive goal programming (PEP) method also supports the solution methodologies to handle multiple terms with different scales and priorities in the objective function of the model. The model is analyzed on the 6- and 118-bus IEEE standard test systems. Sectionalization of a transmission network is closely related to partitioning a graph (i.e., the lines are considered as edges and the grid buses as graph nodes). The graph partitioning problem (GPP) is formulated as a MIP model to minimize the number of disjoint (cut) edges so that well-connected sections can be formed. Therefore, the proposed restoration model is combined with the GPP model: the sectionalization constraints are replaced with GPP constraints while the GPP objective is added to the model's objective. The new GPP-based restoration model is examined for both the 6- and 118-bus case studies, and the results are compared with the first sectionalization approach. The analysis of the advantages and disadvantages of the first and second restoration models is ongoing. Both proposed deterministic models are solved under the assumption of a given status for the transmission network after disruption; however, it is rarely possible to precisely predict the post-event status of a transmission network following extreme weather. Hence, the post-event status of transmission lines can be treated as a source of uncertainty. In this study, a robust optimization model is provided to address this uncertainty. The proposed robust model is a scenario-based version of the GPP-based model; the scenarios are prepared from simulated hurricane wind speeds and the fragility profile of transmission lines. Furthermore, a worst-case model against all realizations of the grid post-event status is provided. The results on the 118-bus test system give a reliable solution across all scenario realizations, with a narrow band in the objective performance measures.
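The graph-partitioning idea behind the sectionalization step can be illustrated with a toy search: pick balanced sections of buses so that the number of cut transmission lines is minimized. The 6-bus topology and the balance constraint below are made up for illustration and are not the dissertation's MIP formulation.

```python
from itertools import combinations

# Toy 6-bus grid (hypothetical topology, not the dissertation's test system):
# transmission lines are graph edges, buses are graph nodes.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
nodes = range(6)

def cut_size(section):
    """Number of lines joining the candidate section to the rest of the grid."""
    s = set(section)
    return sum((u in s) != (v in s) for u, v in edges)

# Exhaustive search for a balanced two-way sectionalization (3 buses each)
# minimizing the cut -- a tiny stand-in for the GPP objective.
best = min(combinations(nodes, 3), key=cut_size)
print(sorted(best), cut_size(best))  # -> [0, 1, 2] 1
```

A real GPP instance would be solved as a MIP rather than by enumeration, but the objective (few cut edges, well-connected sections) is the same.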
Dealing with large network-structured systems such as a power system is difficult. For this reason, parallel processing via network partitioning is recommended, which can facilitate the process by reducing the size of the target network at each step. Modularity is the common partitioning criterion, but considering another metric alongside it benefits the result. The final chapter of the dissertation addresses the vulnerability of partitions in undirected network partitioning via joint maximization of edge-connectivity and modularity. Edge-connectivity is a graph metric that represents the robustness of the sub-networks, and optimizing it enhances the robustness of the partitions. The problem is formulated as a bi-objective maximization model. The results on multiple random test cases of different sizes are analyzed to demonstrate the model's performance.

Item Analog-To-Digital Data Converters in Bulk CMOS for Harsh Radiation and High Temperature Environment Applications (2019-12)
Vosooghi, Bozorgmehr; Chen, Jinghong; Zagozdzon-Wosik, Wanda; Chen, Jiefu; Fu, Xin; Peng, Jiming

This dissertation focuses on analog-to-digital data converters for harsh environments. There is an increasing demand for reliable high-temperature electronics implemented in low-cost bulk CMOS technologies. First, a detailed study of high-temperature impairments is presented. An analytical small-signal and large-signal model for the high-temperature operation of MOSFET devices in a 0.13 µm bulk CMOS process is then presented. A prototype has been fabricated in 0.13 µm bulk CMOS. The I-V transfer characteristics have been measured at temperatures from 25 °C to 200 °C. The experimental results are compared with simulation results from the developed model to verify the reliability of the EDA tools at high temperature. Second, a scattered temperature sensor is presented that is employed in a high-temperature ADC.
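The kind of temperature dependence such an I-V model must capture can be sketched with a first-order square-law MOSFET equation. The coefficients below (threshold-voltage shift of roughly 2 mV/°C, mobility scaling as T^-1.5) are textbook-style illustrative values, not parameters from the dissertation's 0.13 µm model.

```python
def drain_current(vgs, temp_c, vth25=0.45, k25=2e-4):
    """Square-law saturation drain current with first-order temperature
    effects. Illustrative coefficients only: Vth drops ~2 mV/degC and
    mobility scales as (T/T0)^-1.5 -- not fitted process data."""
    t0, t = 298.15, temp_c + 273.15
    vth = vth25 - 2e-3 * (temp_c - 25.0)   # threshold-voltage shift
    k = k25 * (t / t0) ** -1.5             # mobility degradation
    vov = vgs - vth
    return 0.5 * k * vov * vov if vov > 0 else 0.0

for tc in (25, 100, 200):
    print(tc, drain_current(1.0, tc))
```

The two effects pull in opposite directions (lower Vth raises current, lower mobility reduces it), which is exactly why measured I-V curves over 25 °C to 200 °C are needed to validate the EDA models.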
The main challenges of on-chip CMOS temperature sensor front-ends are addressed, and several techniques and architectures are proposed to improve performance. A prototype has been fabricated in a 0.18 µm CMOS process. The measurement results are presented, demonstrating good accuracy and performance. A temperature sensor is also used for the compensation method proposed in the continuous-time sigma-delta modulator. Third, a robust high-bandwidth continuous-time (CT) sigma-delta ADC in 0.18 µm bulk CMOS technology for high-temperature applications is presented. In order to enable operation in the intended application environment, a compensation method consisting of a temperature sensor has been proposed to compensate for the gm reduction at high temperature. By sensing the temperature, the effective gm of the integrator is increased by stepping up the size of the input pair and the tail current of the operational amplifier. Simulations have been conducted to verify the validity of the proposed techniques. Finally, a radiation-hardened 10-bit 25 MS/s SAR ADC for harsh environments is presented. Different radiation-hardening techniques have been studied, and a Triple Modular Redundancy (TMR) technique has been used. The radiation-hardened ADC is implemented in 0.18 µm bulk CMOS. It achieves an ENOB of 9.4 bits under single-event effects (SEE).

Item Analytics Approaches to the Development of Diabetic Retinopathy Screening Policies (2023-08)
Dorali, Poria; Lim, Gino J.; Lee, Taewoo; Lin, Ying; Weng, Christina Y.; Deshmukh, Ashish A.; Peng, Jiming

Diabetic retinopathy (DR) is the leading cause of blindness for working-age adults in the US. Over 60% of patients with type II diabetes and 90% of patients with type I diabetes develop DR within 20 years of diagnosis. Routine comprehensive screening examinations have proved effective in detecting early stages of DR, and timely treatment can prevent up to 98% of DR-related vision loss. However, only 50-60% of diabetic patients adhere to the current annual screening guidelines.
Recently, teleretinal imaging (TRI) has emerged as an accessible screening tool for patients with limited access to eye care. However, no well-established guideline incorporates TRI-based screening for such patients. In this thesis, we study a multi-pronged analytics approach to quantify and evaluate the advantages and limitations of TRI compared with traditional clinic-based screening (CS) and propose new screening policies for patients with limited access to eye care. First, we develop a simulation model that examines the health and cost benefits of various routine CS and TRI-based DR screening policies at different time intervals for various types of diabetic patients. Additionally, we identify patient subgroups who would truly benefit from TRI in terms of health benefits and cost savings. Second, we develop a partially observable Markov decision process (POMDP) model to generate personalized DR screening recommendations that exploit the dynamic interaction of TRI and traditional screening based on each patient's unique health-related and behavioral factors. Lastly, we develop a decision tree model that establishes interpretable DR screening policies by transforming the complex, POMDP-driven personalized screening policies into policies that are more explainable, implementable, and adoptable in clinical practice.

Item Architectural Approaches to Design Reliable and Energy-Efficient GPUs (2016-05)
Tan, Jingweijia; Fu, Xin; Chen, Jinghong; Chen, Yuhua; Peng, Jiming; Chen, Guoning; Song, Shuaiwen Leon

Modern graphics processing units (GPUs) support thousands of concurrent threads and provide high computational throughput, which makes them popular platforms for general-purpose high-performance computing (HPC) applications. However, this raises reliability and energy-efficiency challenges in GPU architecture design.
Originally designed for graphics applications with relaxed requirements on execution correctness, GPUs lack error detection and fault tolerance features. In contrast, HPC programs have rigorous demands on execution correctness, which poses serious reliability challenges for general-purpose computing on GPUs (GPGPUs). In addition, GPUs consume a large amount of energy to achieve their high computing power: the peak power consumption of a high-end GPU is more than twice that of its CPU counterparts, and the energy efficiency of GPUs fails to grow as fast as their performance improves. In this dissertation, we introduce several architectural approaches to design reliable and energy-efficient GPUs. We first propose several opportunistic techniques that recycle the idle time of streaming processors for soft-error detection and obtain good fault coverage with negligible performance degradation. We further propose to leverage the promising properties of resistive memory to enhance soft-error robustness and reduce the power consumption of GPU registers. We then explore mitigating the susceptibility of the GPU register file to process variations; the proposed techniques significantly improve GPU performance under process variations. After that, we propose an effective and low-cost mechanism to maintain register file reliability with negligible performance loss under process variations and low supply voltages, which enables substantial energy savings via aggressive supply voltage reduction. Finally, we propose an energy-efficient GPU L2 cache design that leverages locality similarity to reduce L2 energy consumption with negligible performance degradation.
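The opportunistic soft-error detection idea can be sketched in software terms: when a streaming processor would otherwise idle, re-execute the instruction and compare. This is a hypothetical software analogue of a hardware mechanism; the interface and names are illustrative, not the dissertation's design.

```python
def checked_execute(op, args, idle_cycles_available):
    """Opportunistically re-execute an operation when the pipeline would
    otherwise idle, comparing results to detect a transient soft error.
    Hypothetical software analogue of an in-hardware mechanism."""
    result = op(*args)
    if idle_cycles_available:
        # Redundant re-execution: a mismatch flags a transient fault.
        if op(*args) != result:
            raise RuntimeError("soft error detected; re-issue instruction")
    return result

print(checked_execute(lambda a, b: a * b, (6, 7), idle_cycles_available=True))
```

The key property, as in the dissertation's techniques, is that detection rides on cycles that would have been wasted anyway, so fault coverage comes with negligible performance cost.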
Overall, these techniques efficiently address the reliability and energy-efficiency challenges in GPU architectures.

Item Design and Simulation of High Speed Low-Power Dual-Mode (NRZ/PAM4 12.8Gbps/25.6Gbps) Serializer and Laser Driver in TSMC 65nm Technology (2017-05)
Pendyala, Praveen Gayatree; Chen, Jinghong; Peng, Jiming; Pei, Shin-Shem Steven

This thesis presents the design and simulation of the schematic of a low-power (5.6 pJ/b) dual-mode (12.8 Gbps NRZ, 25.6 Gbps PAM4) serializer with driver, to be used in a high-speed serial-link transmitter application-specific integrated circuit (ASIC) for High-Energy Physics (HEP) experiments. The serializer and driver are designed in a 65 nm CMOS technology. The ASIC itself will mainly include an LC-VCO phase-locked loop (PLL), a 32:2 serializer, and a CML driver. The driver also employs FIR pre-emphasis using a 2-bit programmable buffer delay chain. The serializer, driver, and pre-emphasis are designed based on a combination of architectures presented in the literature. A VCSEL model based on the literature is built in a Cadence schematic: a Verilog-A module is instantiated to emulate the nonlinear optical low-pass filtering response of a VCSEL, and electrical components form the electrical part of the model. The PAM4/NRZ transmitter schematic presented in this thesis is shown to have an energy efficiency of 5.6 pJ/b (with serializer) and 3.71 pJ/b (without serializer). Substantial improvements in vertical eye openings and jitter were recorded due to pre-emphasis.

Item Designing Smart Ports by Integrating Sustainable Infrastructure and Economic Incentives (2020-05)
Molavi, Anahita; Lim, Gino J.; Feng, Qianmei; Peng, Jiming; Shi, Jian; Vipulanandan, Cumaraswamy

Ports and harbors face stiff competition for market share while striving to deliver a more effective and secure flow of goods worldwide.
High-performing ports are implementing smart technologies to better manage operations and to meet new challenges in maintaining safe, secure, and energy-efficient facilities that mitigate environmental impacts. Key elements and associated challenges in ports include operations (e.g., congestion, delays, operating errors, and lack of information sharing), environment (e.g., air, water, and noise pollution, waste disposal, and construction and expansion activities), energy (e.g., increasing energy consumption, increasing energy costs, and the impact of energy disruptions on port activities), safety (e.g., berthing impacts, vessel collisions, and striking while at berth), and security (e.g., armed robbery, cyber-security issues, unlawful acts, stowaways, drug smuggling, the use of ports as conduits for moving weapons, and terrorist attacks). In response to these problems, ports are adopting technology-based solutions as well as new approaches to port operations planning and management; implementing such solutions is known as the transition to smart ports. Although there are ongoing smart port initiatives around the world, a unified definition of a smart port has not been well documented. The proposed research attempts to conceptualize and define smart ports and to enable them through the integration of sustainable infrastructure such as microgrids and onshore power supply. As defined by the Department of Energy (DOE), a microgrid is a relatively small-scale localized energy network that features the effective integration of a high penetration level of Distributed Energy Resources (DERs), such as renewable energy resources, energy storage devices, and controllable loads. As the first contribution, we develop a framework for a smart port and a quantitative metric, the Smart Port Index (SPI), that ports can use to improve their resiliency and sustainability.
Our proposed SPI is based on Key Performance Indicators (KPIs) gathered from the literature. These KPIs are organized around four key activity domains of a smart port: operations, environment, energy, and safety and security. Case studies are conducted to show how one can use the SPI to assess the performance of some of the busiest ports in the world. Our methodology provides a quantitative tool for port authorities to develop their smart port strategies, assess their smartness, and identify strengths and weaknesses of their current operations for continuous improvement. Our study reveals that smart port initiatives around the world have different levels of comprehensiveness. The results also suggest that government policies and region-specific variables can affect the SPI value. The second contribution presents a systematic framework for evaluating the benefits of microgrid integration for industrial ports. Ports are critical infrastructure with significant power demands and emission-reduction goals, which makes them ideal candidates for exploring the opportunities that microgrids can offer. We demonstrate how a set of modified Smart Port Index (SPI) metrics can be incorporated into the port microgrid planning process to holistically improve the smartness of the port. A two-stage stochastic mixed-integer model is developed to evaluate the effectiveness of the proposed approach under operational uncertainties. The proposed model consists of an investment master problem in the first stage and a multi-objective operation planning subproblem in the second stage. Benders decomposition is implemented to solve the stochastic model, and Lexicographic Goal Programming is applied to the subproblem to handle the multiple objectives in the model. Case studies were performed to evaluate the effectiveness of the proposed approach in enhancing the major activity domains of a port.
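A composite index built from domain-level KPIs can be sketched as a weighted average over the four activity domains named above. The weights and normalized scores below are made-up examples, not values from the dissertation.

```python
# Illustrative Smart Port Index: a weighted average of normalized KPI
# scores over the four activity domains named in the text. Weights and
# scores are hypothetical, not the dissertation's calibrated values.
weights = {"operations": 0.3, "environment": 0.3,
           "energy": 0.2, "safety_security": 0.2}
scores = {"operations": 0.8, "environment": 0.6,
          "energy": 0.7, "safety_security": 0.9}  # each normalized to [0, 1]

spi = sum(weights[d] * scores[d] for d in weights)
print(round(spi, 3))
```

An index of this shape makes ports comparable on a single scale while the per-domain scores still expose where a given port is weak.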
Numerical results indicate that, compared with the minimum-cost planning approach, the proposed framework is capable of improving the productivity, sustainability, and reliability of port operations. This contribution also studies the investment and planning of onshore power supply (OPS) at port microgrids and evaluates the benefits of OPS integration in improving port sustainability and energy efficiency. We show how OPS can be installed and planned along with microgrids at ports to provide clean power to vessels at berth. Numerical results illustrate that integrating OPS with a port microgrid noticeably reduces emissions from port activities without hindering the economics and competitiveness of the port entity. The last part of this dissertation studies ports' sustainable development and the economic incentives designed for this purpose. To promote sustainability strategies and technologies at ports, policy-makers have introduced regulations and economic incentives. In this contribution, we analyze the process in which a regulatory authority defines regulations, incentives, and tax policies to motivate one or more ports in the region to initiate energy sustainability and emission-reduction efforts. We model the behaviors of both the regulatory authority and the participating ports as a multi-objective mixed-integer nonlinear bilevel optimization problem to capture the hierarchy of the policy-making process and the existing competition among the ports. The proposed model finds the optimal incentive and tax policies for the policy-maker in the upper level and provides the ports in the region with the optimal choice of smart and sustainable energy solutions and service prices in the lower level.
Simulation results show that the proposed approach can effectively reduce region-wide emissions due to port activities while ensuring port entities' welfare, competitiveness, and sustainable growth as regional energy hubs.

Item FPGA-based Data Acquisition System for SiPM Detectors (2019-05)
Townsend, Jeremy Todd; Chen, Jinghong; Fu, Xin; Peng, Jiming

Positron emission tomography (PET) is a growing field with increasing influence in nuclear medicine. Recent time-of-flight advancements have increased cross-sectional image clarity, and the progression toward miniature, silicon-based photomultipliers has increased resolution. As photomultiplier size decreases, channel count increases, and the use of ASIC-based time-to-digital converters (TDCs) with remote event processing is no longer viable due to the increasing cost and complexity. Instead, multichannel, FPGA-based measurements with localized event processing are a must. In this work, an FPGA-based SiPM DAQ is presented with emphasis on the application of PET imaging. An introduction to PET concepts is included, with a brief description of measurement techniques, followed by a review of previous work with a detailed explanation of relevant methods. Next, an analysis of the problem covers specific implementation details and difficulties, followed by a comparison of results to previous work. A DAQ based on a 28 nm Kintex-7 FPGA is examined, implementing a 32-channel, multichain-averaged TDC with 15 ps RMS resolution, 11 ps average bin size, and less than 10 ps of integral nonlinearity.

Item Fractionated Treatment Planning of Radiation Therapy Considering Biological Response (2019-05)
Nouri, Nasrin; Lim, Gino J.; Lee, Taewoo; Peng, Jiming; Varadarajan, Navin; Vipulanandan, Cumaraswamy

The goal of radiation therapy for cancer patients is to kill tumor cells by damaging their DNA.
For the majority of patients, the prescribed dose is divided into several treatment sessions (a fractionated treatment plan) to avoid lethal damage to the surrounding healthy organs, called organs at risk (OARs). In conventional practice, the treatment policy is to deliver an equal amount of radiation dose to the patient over multiple treatment sessions. Such an approach neglects the uncertainties associated with tumor dynamics, biological response to radiation, and organ motion that occur during radiation treatment. In this dissertation, we propose methods to tackle the current challenges and shortcomings in radiotherapy treatment planning. In the first part, a constrained partially observable Markov decision process (POMDP) approach is proposed, based on an extended biological model of cell survival, to incorporate the patient's biological response into the fractionated radiotherapy plan. A Gompertzian growth function is used to explain the dependence of the tumor growth rate on its density and shape. The aim of our model is to maximize the expected biological equivalent dose (EBED) of the tumor while keeping OAR survival under control. Because the condition of a tumor can change and is not fully observable through CT images during the treatment horizon, the POMDP enables us to consider tumor symptoms through probabilistic beliefs and partial observation probabilities. We provide a control-limit policy to investigate whether using a POMDP offers an advantage over the conventional plan in terms of tumor damage and OAR sparing. Numerical results showed the potential of the POMDP policies to enhance tumor coverage compared to the conventional plan. The resulting policies suggested the use of a low dose at earlier sessions and a higher dose at later sessions, reflecting the impact of tumor density and shape on its growth and biological response.
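The Gompertzian growth dynamics mentioned above can be sketched in closed form: growth slows as the cell population approaches a carrying capacity, so growth rate depends on the current tumor burden. The parameter values below are illustrative only, not the dissertation's fitted values.

```python
import math

def gompertz(n0, k, carrying, t):
    """Gompertzian tumor-cell count at time t: the growth rate slows as
    the population approaches its carrying capacity (density dependence).
    Parameter values used below are purely illustrative."""
    return carrying * math.exp(math.log(n0 / carrying) * math.exp(-k * t))

n0, k, cap = 1e7, 0.05, 1e9  # initial cells, rate constant, capacity
for day in (0, 30, 90):
    print(day, f"{gompertz(n0, k, cap, day):.3e}")
```

This density dependence is what makes nonuniform fraction sizes attractive: the marginal benefit of a given dose changes with the tumor's current state.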
The POMDP policy was not recommended if the tumor was a late-responding tissue and its corresponding OAR was an early-responding tissue. Unlike photons, a proton's linear energy transfer (LET) increases as it penetrates the body; therefore, proton therapy can be modulated to provide better biological effectiveness. In the second part of this dissertation, we develop an LET-based IMPT optimization model that guarantees homogeneous biological effectiveness on the tumor structure and minimum damage to the OARs. The outcomes of this model serve as the action set in a constrained MDP framework developed to provide an optimal decision-making policy for dynamic and personalized fractionated proton therapy treatment plans. The tumor state is predicted using a random forest classification model built on radiomics data from CT images. The proposed model is applied to two cases, prostate cancer and pediatric ependymoma, and compared to a regular IMPT model as the baseline. The results demonstrate that the LET-based IMPT model improves biological effectiveness and tumor control probability (TCP). Randomized MDP policies suggest a smaller target dose for a high tumor cell count, where the tumor growth rate is at its lowest value; as the tumor cell count decreases, a larger dose is suggested to destroy the faster-growing tumor. Protons' unique physical characteristics make proton therapy sensitive to organ motion, such that a voxel can receive a nonuniform dose deposition across fractions, and the biological effectiveness of the treatment might therefore deviate from the planned effectiveness. In the last part of this dissertation, we develop a model that optimizes the fractionation and IMPT problems simultaneously. We use a 4DCT data set to plan a 3D delivery technique that handles complex respiratory motion patterns while avoiding sophisticated 4D delivery systems.
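The trade-off between fraction size and fraction count that these models optimize is commonly quantified with the standard linear-quadratic biologically effective dose, BED = n·d·(1 + d/(α/β)). The sketch below uses textbook numbers (an α/β of 10 Gy for tumor tissue) rather than any plan from the dissertation.

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose of n fractions of d Gy each under the
    standard linear-quadratic model: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

# Conventional 2 Gy x 35 vs. a hypofractionated 3 Gy x 20 schedule,
# for a tumor alpha/beta of 10 Gy (typical textbook value).
print(bed(35, 2.0, 10.0), bed(20, 3.0, 10.0))
```

Because BED is nonlinear in the per-fraction dose d, two schedules with different total physical doses can have similar biological effect, which is why fractionation itself is a decision variable.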
Two models are used to solve this problem: a statistical mean-variance model and a robust worst-case model. The worst-case robust model provides a more robust dose distribution over all structures than the statistical mean-variance model. Both models suggest a larger amount of radiation dose in the first week of treatment, gradually decreasing the dose toward the last week. The resulting weekly mean BED is shown to be almost equal in all treatment weeks, compensating for the increased repair effect resulting from nonuniform voxel doses between fractions. Because of the conservatism of the worst-case robust model, a larger total dose has to be delivered in every treatment week to achieve the same biological effectiveness as the statistical mean-variance model.

Item High-Performance CMOS Front-End ASICs for SiPM Detectors and High-Frequency Ultrasound and Photoacoustic Imaging (2021-12)
Tang, Yuxuan; Chen, Jinghong; Zagozdzon-Wosik, Wanda; Chen, Yuhua; Fu, Xin; Peng, Jiming; Jackson, David R.

The silicon photomultiplier (SiPM), a high-sensitivity photon detector, has been widely used in high-energy physics, positron emission tomography imaging, and light detection and ranging applications. The slow rising edge of the standard SiPM signal, however, makes the timing measurement sensitive to noise and leads to poor timing resolution. In addition, SiPM energy measurement using charge-sensitive amplifiers suffers from high power consumption and is not suitable for array-based SiPM readout systems. To solve these issues, two hardware prototypes in a 180 nm CMOS process have been fabricated and experimentally characterized. The first prototype is a single-channel SiPM readout featuring an on-chip fast-signal generator and a customized successive-approximation-register (SAR) analog-to-digital converter (ADC). The on-chip fast-signal generator sharpens the slow rising edge of the SiPM signal, improving the timing resolution.
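Why sharpening the rising edge helps can be seen from the standard leading-edge timing relation, σ_t ≈ σ_v / (dV/dt): for fixed voltage noise, a faster edge at the threshold crossing gives proportionally less timing jitter. The noise and slew-rate numbers below are illustrative, not measurements from the prototype.

```python
def timing_jitter(noise_rms_v, slew_v_per_s):
    """Leading-edge timing jitter of a threshold crossing:
    sigma_t ~= sigma_v / (dV/dt). Sharpening the SiPM rising edge
    (a larger slew rate) directly improves timing resolution."""
    return noise_rms_v / slew_v_per_s

slow = timing_jitter(1e-3, 5e6)    # 1 mV RMS noise, 5 V/us edge
fast = timing_jitter(1e-3, 50e6)   # same noise, 10x faster edge
print(slow, fast)                  # jitter shrinks with edge speed
```

Under these example numbers, the tenfold edge speed-up cuts the jitter contribution from hundreds of picoseconds to tens, which is the regime the reported 151 ps timing resolution lives in.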
The customized ADC uses the SiPM charge integrator as the ADC track-and-hold circuit, lowering the ADC power consumption. Measurement results show the readout front-end achieves a timing resolution of 151 ps while dissipating 4.02 mW of power. The second prototype demonstrates a shared SAR ADC architecture in a multi-channel SiPM readout to reduce chip area and power consumption. The ADC is shared by 16 readout channels in a time-multiplexed manner and achieves an SFDR of 58.34 dB and an SNDR of 51.37 dB at 16 MS/s. High-frequency (30 to 100 MHz) ultrasound and photoacoustic imaging with improved microscopic resolution opens new medical applications in ophthalmology, intravascular imaging, and systemic sclerosis. To break the tradeoff between noise and wideband impedance matching, a wideband low-noise amplifier (LNA) with noise and distortion cancellation is developed. The LNA employs a resistive shunt-feedback structure with a feedforward noise-canceling technique to accomplish both wideband impedance matching and low-noise performance. A complementary CMOS topology is also developed to cancel the second-order harmonic distortion and enhance linearity. A front-end including the proposed LNA and a variable-gain amplifier is designed and fabricated in a 180 nm CMOS process. At 80 MHz, the front-end achieves an input-referred noise density of 1.36 nV/sqrt(Hz), an S11 better than -16 dB, and a total harmonic distortion of -55 dBc while consuming 37 mW of power.

Item Intensity Modulated Proton Therapy Optimization Under Uncertainty: Field Misalignment and Internal Organ Motion (2016-12)
Liao, Li; Lim, Gino J.; Feng, Qianmei; Peng, Jiming; Zhang, Xiaodong; Zhu, X. Roland

Intensity modulated proton therapy (IMPT) is one of the most advanced forms of radiation therapy; it can deliver a highly conformal dose to the tumor while sparing healthy tissues.
Compared to conventional photon-based radiation therapy, IMPT is more flexible in delivering radiation dose to different tumor shapes. However, this flexibility also makes the optimization problems in IMPT harder to solve; e.g., they require more memory to store data and longer computation times. Furthermore, proton beams are very sensitive to uncertainties such as setup uncertainty, range uncertainty, and internal organ motion, which can greatly impact the quality of clinical treatment. Therefore, this dissertation investigates different optimization methods for treatment planning that handle a variety of uncertainties in IMPT. First, to solve the fluence map optimization (FMO) problem in IMPT, we propose a method that formulates the FMO problem as a molecular dynamics model, so that the FMO problem can be optimized as a classical dynamics system. This method combines the advantages of gradient-based algorithms and heuristic search algorithms. Next, we develop and validate a robust optimization method for IMPT treatment plans with multi-isocenter large fields to overcome the dose inhomogeneity problem caused by setup misalignment in field junctions. Numerical results show that the robust-optimized IMPT plans create a low-gradient radiation dose in the junction regions, which minimizes the impact of misalignment uncertainty. Compared to conventional techniques, the robust optimization method makes the whole treatment much more efficient. Lastly, we focus on a two-stage method to solve the beam angle optimization (BAO) problem in IMPT under internal organ motion uncertainty. In the first stage, a p-median algorithm is developed for beam angle clustering. In the second stage, a bi-level search algorithm is used to find the final beam angle set for the treatment.
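The p-median clustering idea in the first stage can be illustrated with a toy instance: choose p representative beam angles that minimize the total angular distance from every candidate angle to its nearest representative. The candidate angles below are made up for illustration and are not from the dissertation's case studies.

```python
from itertools import combinations

# Hypothetical candidate beam angles (degrees) and cluster count.
angles = [0, 10, 20, 90, 100, 180, 190, 200]
p = 3

def angular_dist(a, b):
    """Distance on the circle of gantry angles."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def cost(medians):
    """p-median objective: total distance to the nearest chosen angle."""
    return sum(min(angular_dist(a, m) for m in medians) for a in angles)

# Brute force over all p-subsets -- a tiny stand-in for the p-median step.
best = min(combinations(angles, p), key=cost)
print(sorted(best), cost(best))  # -> [10, 90, 190] 50
```

In a real BAO instance the candidate set is large and the p-median problem is solved algorithmically, but the objective shape is the same: a few representative angles summarizing clusters of similar candidates.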
Furthermore, a support vector machine (SVM) is used for beam angle classification to reduce the search space, and 4D-CT information is incorporated to handle the internal organ motion uncertainty. Results show that the two-stage BAO method consistently finds a high-quality solution in a short time.

Item Lévy, Non-Gaussian Ornstein-Uhlenbeck, and Markov Additive Processes in Reliability Analysis (2016-08)
Shu, Yin; Feng, Qianmei; Liu, Hao; Lim, Gino J.; Peng, Jiming; Kao, Edward P. C.

Unavoidable degradation is one of the major failure mechanisms of many systems, due to internal properties (mechanical, thermal, electrical, or chemical) and external influences (temperature, humidity, or vibration). Such degradation in critical engineering systems (e.g., pipelines, wind turbines, power/smart grids, and mechanical devices) takes the form of corrosion, erosion, fatigue cracking, deterioration, or wear that may lead to the loss of structural integrity and catastrophic failure. Therefore, developing stochastic degradation models based on appropriate stochastic processes is imperative in the reliability and statistics research communities. This dissertation aims to develop a new research framework that integrally handles the complexities of degradation processes (intrinsic/extrinsic stochastic properties, complex jump mechanisms, and dependence) based on general stochastic processes, including Lévy, non-Gaussian Ornstein-Uhlenbeck (OU), and Markov additive processes, and to develop a new systematic methodology for reliability analysis that provides compact and explicit results for the reliability function and lifetime characteristics. First, to handle the intrinsic stochastic properties and complex jumps, we use Lévy subordinators and their functional extensions, Lévy-driven non-Gaussian OU processes, to model cumulative degradation with jumps that occur at random times and have random sizes.
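A gamma process, the simplest of the Lévy subordinators mentioned above, can illustrate jump-type cumulative degradation and first-passage failure by Monte Carlo. The threshold, increment parameters, and sample size below are illustrative, not the dissertation's data; the dissertation's own results are analytic, via Fokker-Planck equations and Lévy measures.

```python
import random

def failure_time(threshold, shape_rate=0.5, scale=1.0, dt=1.0, rng=None):
    """First time a gamma-process degradation path crosses the failure
    threshold. Each increment over dt ~ Gamma(shape_rate*dt, scale), so
    the path is nondecreasing with random jump sizes (a subordinator).
    All parameter values here are illustrative."""
    rng = rng or random.Random()
    level, t = 0.0, 0.0
    while level < threshold:
        level += rng.gammavariate(shape_rate * dt, scale)
        t += dt
    return t

rng = random.Random(42)
times = [failure_time(10.0, rng=rng) for _ in range(2000)]
reliability_at_15 = sum(t > 15 for t in times) / len(times)
print(round(sum(times) / len(times), 2), reliability_at_15)
```

The empirical survival fraction at a given time approximates the reliability function R(t) = P(T > t) that the dissertation characterizes in closed form.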
We then integrally handle the complexities of a degradation process, including both intrinsic and extrinsic stochastic properties with complex jump mechanisms, by constructing general Markov additive processes. Moreover, the models are extended to multi-dimensional cases for multiple dependent degradation processes under dynamic environments, where Lévy copulas are studied to construct Markov-modulated multi-dimensional Lévy processes. The Fokker-Planck equations for such general stochastic processes are developed, based on which we derive explicit results for the reliability function and lifetime moments, represented by the Lévy measures, the infinitesimal generator matrices, and the Lévy copulas. To analyze degradation data series from such degradation phenomena of interest, we propose a systematic statistical estimation method using linear programming estimators and empirical characteristic functions. We also construct bootstrap procedures for the confidence intervals. Simulation studies for Lévy measures of gamma processes, compound Poisson processes, positive stable processes, and positive tempered stable processes are performed. The framework can be recognized as a general approach that can flexibly handle stylized features of widespread classes of degradation data series such as jumps, linearity/nonlinearity, symmetry/asymmetry, and light/heavy tails. The results are expected to provide accurate reliability prediction and estimation that can be used to assist in mitigating the risk and property loss associated with system failures.
Item Maritime Vehicle Routing under Uncertainty: Liquefied Natural Gas Shipping and Offshore Pipeline Damage Assessment Problems(2016-08) Cho, Jaeyoung; Lim, Gino J.; Peng, Jiming; Tekin, Eylem; Vipulanandan, Cumaraswamy; Nikolaou, Michael
The maritime vehicle routing and scheduling problem has been studied extensively in the context of risk mitigation.
This dissertation addresses three maritime vehicle routing problems and their mathematical frameworks under environmental uncertainty. First, an LNG shipping problem is investigated considering LNG market changes, advances in ship construction technology, and random boil-off gas (BOG) generation. This is formulated as a two-stage stochastic mixed integer program. In the first stage, a single production-inventory plan and routing schedule is determined before the realization of the random BOG generation. For every possible realization of the random BOG, the second-stage variables represent the amount of LNG surplus or shortage when an LNG carrier arrives at a regasification plant. This model provides a flexible transportation strategy reflecting LNG market trends and diversified LNG carrier specifications. Second, LNG production-inventory planning and ship routing under random weather disruptions is discussed. This problem is formulated as two optimization models: a two-stage stochastic mixed integer programming model and a parametric optimization model. The first maximizes the overall expected revenue while minimizing the disruption cost that results from extreme weather. The second, a parametric optimization model, reflects the decision maker's risk preference by varying the ratio of revenue to on-time delivery. Therefore, a decision maker can perform a what-if analysis to compare multiple options for the final planning decision. A stochastic production-inventory control constraint set is also developed, which synchronizes the production-inventory plan and the LNG carrier routing schedule under weather disruption. Lastly, an offshore pipeline network damage assessment problem is discussed. In order to assess what might have caused pipeline damage during a weather disruption, multiple AUVs are pre-positioned at selected underwater locations before the extreme weather begins.
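The two-stage structure used in the LNG shipping models above, a first-stage load decision followed by scenario-dependent surplus/shortage recourse, can be sketched with a toy scenario set (all numbers hypothetical):

```python
def expected_cost(q, scenarios, demand=100.0, shortage_penalty=5.0, surplus_cost=1.0):
    """Expected second-stage cost of shipping q units: each scenario is a
    (probability, boil-off fraction) pair; delivered = q * (1 - fraction).
    A shortage below demand is penalized more heavily than a surplus."""
    cost = 0.0
    for prob, bog in scenarios:
        delivered = q * (1.0 - bog)
        if delivered < demand:
            cost += prob * shortage_penalty * (demand - delivered)
        else:
            cost += prob * surplus_cost * (delivered - demand)
    return cost

# Hypothetical boil-off scenarios: mild, moderate, severe.
scenarios = [(0.5, 0.02), (0.3, 0.05), (0.2, 0.10)]
candidates = [100 + i for i in range(16)]  # candidate first-stage loads
best_q = min(candidates, key=lambda q: expected_cost(q, scenarios))
# Loading extra LNG hedges against boil-off: best_q exceeds the demand of 100.
```

The real model optimizes routing and inventory jointly as a mixed integer program; this sketch isolates only the here-and-now versus recourse trade-off.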
Once the weather clears up, the pre-deployed AUVs start pipeline damage assessment. This problem is formulated as a two-phase multiple-AUV pre-positioning and routing model. The first-phase problem is to determine optimal AUV pre-positioning locations considering the maximum AUV operating distance and random weather impact. In the second phase, AUV paths are generated to scan the designated offshore pipeline networks while minimizing an operating cost proportional to the number of pre-deployed AUVs.
Item Multi-dimensional Lévy Processes and Lévy Copulas For Dependent Degradation Processes in Reliability Analysis(2019-12) Shi, Yu; Feng, Qianmei; Lim, Gino J.; Peng, Jiming; Cheng, Liang Chieh; Rao, Jagannatha R.
Cumulative degradation is one of the unavoidable failure mechanisms that occur in many engineering systems in chemical, civil, mechanical, and other fields. These deterioration phenomena are caused by internal structures and dynamic external conditions. In critical systems (e.g., aircraft, power systems, and railways), such degradation can lead to operational failure of a system, loss of economic profit, and even endanger human lives. Moreover, multiple dependent degradation processes can occur in a system simultaneously. In order to avoid failures in engineering systems, it becomes critical and urgent in reliability studies to develop new multi-dimensional dependent degradation models using appropriate stochastic processes. This dissertation aims to develop a framework to integrally handle complicated degradation with uncertain jumps in multi-dimensional dependent degradation processes, based on multi-dimensional Lévy processes and various Lévy copulas, for reliability and lifetime analyses in various industries. To model common degradation that is non-decreasing over time, we use Lévy subordinators, a class of Lévy processes with non-decreasing paths. Random jumps are described by special Lévy measures for the Lévy subordinators of interest.
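As a concrete example of a Lévy subordinator, a gamma process can be simulated from its independent gamma-distributed increments; the sketch below uses hypothetical parameters, not data from the dissertation:

```python
import random

def gamma_process_path(shape_rate, scale, t_end, dt, rng):
    """Sample path of a gamma process (a Lévy subordinator): successive
    independent Gamma(shape_rate * dt, scale) increments yield a
    non-decreasing degradation path."""
    path, level = [0.0], 0.0
    for _ in range(int(t_end / dt)):
        level += rng.gammavariate(shape_rate * dt, scale)
        path.append(level)
    return path

rng = random.Random(7)
# Hypothetical parameters: mean degradation rate = shape_rate * scale = 1.5 per unit time.
path = gamma_process_path(shape_rate=3.0, scale=0.5, t_end=10.0, dt=0.1, rng=rng)
```

Because every increment is non-negative, the path is monotone, which is exactly the property that makes subordinators natural models for cumulative damage.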
The relationship between high-dimensional Lévy copulas and the associated multi-dimensional Lévy measures is introduced in Chapter 3, which is the foundation for modeling the internal structures of multi-dimensional degradation processes. Based on multi-dimensional Lévy measures and high-dimensional Fokker-Planck equations, we derive Laplace transform expressions for the reliability function and lifetime moments. In Chapter 4, we study reliability and lifetime through characteristic functions of multi-dimensional stochastic processes. We derive the reliability function for a two-dimensional degradation process modeled by two-dimensional Lévy subordinators. In Chapter 5, we consider degradation and random jumps with time dependence and extend the Lévy subordinators to non-homogeneous subordinators. The marginal/joint reliability functions and probability density functions of lifetime are derived for different types of Lévy measures. To illustrate our proposed models, we simulate multi-dimensional Lévy subordinators and conduct numerical analysis of these simulated processes under Lévy copulas. The results demonstrate that our multi-dimensional Lévy subordinator and non-homogeneous subordinator models perform well and provide a new methodology to analyze degradation, reliability, and lifetime for a system that degrades over time.
Item Novel Applications of Optimization Models in Drone Routing and Scheduling(2021-05) Park, Hyungjin; Lim, Gino J.; Peng, Jiming; Vipulanandan, Cumaraswamy
Drone technologies can have a positive impact on surveillance, emergency response, and delivery. Many existing optimization models in drone routing and scheduling focus on minimizing the cost or time required to complete a mission. This study explores novel applications of drones for healthcare delivery and structural inspection considering the physics of battery consumption, which is often ignored in the Operations Research community.
The COVID-19 pandemic has affected everyone in ways never imagined, and various social distancing measures are in place to reduce the spread of viruses. If at-home testing kits are safely and quickly delivered to patients, human contact can potentially be reduced, curbing disease spread before, during, and after diagnosis. Hence, the first subject of this thesis proposes testing kit delivery schedules using drones based on the Mothership and Drone Routing Problem (MDRP). Optimization models and a decomposition-based solution methodology are developed to solve the complex model. The performance in reducing virus spread was measured by the 'R' method. Computational results show that the proposed approach (R = 0.002) resulted in considerably lower infection risk compared to the face-to-face testing practice (R = 0.0153). The second subject of this thesis introduces drone path planning for structural inspection considering the physics of battery consumption. Short battery duration remains a major problem for small drones. Given the shape of large structures, drones go through a variety of flight dynamics during a mission, in which certain maneuvers consume battery faster than others. However, these factors have not been thoroughly considered in existing routing models. Hence, this study examines different aspects of routing drones to cover multiple inspection points distributed on a three-dimensional structure. Two MIP models (labeled SFD and MEC) are developed to obtain optimal routing strategies for the shortest distance and the minimum battery consumption, respectively. Numerical results show that the optimal solutions from these two models produce different paths.
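The contrast between shortest-distance and battery-aware routing can be reproduced on a toy 3D instance. The sketch below assumes a simplified battery model in which climbing costs extra energy; the penalty and waypoints are made up and do not reflect the dissertation's SFD/MEC formulations:

```python
from itertools import permutations
import math

def leg_cost(a, b, climb_penalty=0.0):
    """Cost of flying waypoint a -> b: Euclidean distance plus an extra
    battery penalty per unit of altitude gained (descent adds nothing)."""
    return math.dist(a, b) + climb_penalty * max(0.0, b[2] - a[2])

def best_route(points, climb_penalty):
    """Brute-force open tour starting at points[0], visiting all others."""
    start, rest = points[0], points[1:]
    def tour_cost(order):
        route = [start] + list(order)
        return sum(leg_cost(a, b, climb_penalty) for a, b in zip(route, route[1:]))
    return min(permutations(rest), key=tour_cost)

# Hypothetical inspection waypoints (x, y, z) on a structure.
pts = [(0, 0, 0), (10, 0, 5), (10, 10, 0), (0, 10, 5)]
shortest = best_route(pts, climb_penalty=0.0)  # pure distance objective
battery = best_route(pts, climb_penalty=2.0)   # battery-aware objective
```

On this instance the distance-optimal tour climbs twice while the battery-aware tour visits both elevated waypoints in succession, so the two objectives select different paths.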
Understanding that each decision maker may have different preferences between those two objectives, a bi-objective optimization model has been developed to find an efficient frontier of solutions that satisfies the decision maker's preference.
Item Novel Parallel Algorithms for a Class of Deterministic Linear Optimization Problems(2014-08) Ma, Likang; Lim, Gino J.; Feng, Qianmei; Peng, Jiming; Gabriel, Edgar; Johnsson, Lennart
The p-median problem and Intensity-Modulated Radiation Therapy (IMRT) treatment planning problems are very important practical applications in the area of optimization. Real-life instances of both problems are time-consuming to solve using traditional solution techniques. However, both problems can be involved in time-sensitive decision-making processes, in which rapid and accurate solutions are required. This study explores parallel computational algorithms and implementations for these two discrete optimization problems. Specifically, we address the use of Graphics Processing Unit (GPU) and Central Processing Unit (CPU) based algorithms that are tailored to the needs of real-life applications. The p-median problem, which is NP-hard, is often used to model many real-world situations. Although a polynomial-time algorithm is available when the number of medians is fixed, large-scale p-median problems are still very difficult to solve. Previous studies using a GPU to solve the p-median problem in parallel are limited. We propose the design and implementation of the parallel Vertex Substitution (pVS) algorithm for the p-median problem based on high-performance, many-core GPUs. pVS is based on the best-profit search algorithm, an implementation of Vertex Substitution (VS), which is shown to produce reliable solutions for the p-median problem. Numerical experiments show pVS achieved speed gains ranging from 10x to 57x over traditional CPU-based Vertex Substitution.
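The core of vertex substitution is a swap local search: repeatedly replace one chosen median with a non-chosen vertex whenever that lowers the total assignment cost. A minimal sequential sketch (not the GPU implementation) on a toy instance:

```python
import random

def vertex_substitution(dist, p, rng):
    """First-improvement swap local search for the p-median problem:
    replace one chosen median with a non-chosen vertex whenever the
    total assignment cost decreases; stop at a swap-local optimum."""
    n = len(dist)
    medians = set(rng.sample(range(n), p))
    def cost(meds):
        return sum(min(dist[i][m] for m in meds) for i in range(n))
    improved = True
    while improved:
        improved = False
        current = cost(medians)
        for out in list(medians):
            for inn in range(n):
                if inn in medians:
                    continue
                trial = (medians - {out}) | {inn}
                if cost(trial) < current:
                    medians, improved = trial, True
                    break
            if improved:
                break  # rescan from the updated median set
    return medians, cost(medians)

# Two well-separated clusters on a line; p = 2 should pick one per cluster.
points = [0, 1, 2, 100, 101, 102]
dist = [[abs(a - b) for b in points] for a in points]
medians, total = vertex_substitution(dist, p=2, rng=random.Random(0))
```

The GPU version parallelizes the evaluation of all candidate swaps; the swap neighborhood itself is the same.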
The Fluence Map Optimization (FMO) problem in IMRT can be modeled as a large-scale LP problem. Real-life FMO problems can be time-consuming to solve using traditional sequential LP solvers. We developed a GPU-based parallel linear programming solver (GPU LP solver) using the Bounded Variable Simplex algorithm with steepest-edge pricing for large-scale sparse LP problems. This solver is designed for general linear programming problems and can be used in branch-and-bound techniques for mixed integer programming problems. We propose a parallel explicit matrix update method to replace the transformation-based matrix update in sequential simplex. A special sparse matrix format is designed to improve the speed of sparse column selection and parallel matrix operations. We tested our GPU-based LP solver on two FMO problems and obtained a 2x speedup compared to CPLEX 12.1. The Beam Angle Optimization (BAO) problem in IMRT is a Combinatorial Optimization Problem (COP) for which it is very difficult to obtain an optimal solution. Previous studies explored the theory and implementation of solution techniques for general COP problems. However, applications of such techniques to IMRT problems usually apply many approximations, which may impact the quality of the final solution. Parallelizations of those techniques for BAO are also limited in the literature. We focused our research on CPU-based parallel algorithms for IMRT treatment planning using the Message Passing Interface (MPI). We developed an MPI-based master-worker framework for solving BAO problems using various types of algorithms, including genetic algorithms and simulated annealing. The proposed framework separates integer variables from the MIP model and uses optimal LP solutions as evaluation functions. We developed a hybrid framework to communicate between algorithms in parallel.
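The outer search that such a master-worker framework distributes can be sketched with a plain simulated annealing loop, where a cheap toy function stands in for the LP-based evaluation (angles, targets, and parameters below are all hypothetical):

```python
import math
import random

def simulated_annealing(candidates, k, evaluate, rng, steps=500, t0=1.0, cooling=0.995):
    """Select k items from candidates minimizing evaluate(subset). The
    neighborhood move swaps one element for a random outside candidate;
    worse moves are accepted with the usual Boltzmann probability."""
    current = rng.sample(candidates, k)
    cur_val = evaluate(current)
    best, best_val = list(current), cur_val
    temp = t0
    for _ in range(steps):
        neighbor = list(current)
        neighbor[rng.randrange(k)] = rng.choice(
            [c for c in candidates if c not in neighbor])
        val = evaluate(neighbor)
        if val < cur_val or rng.random() < math.exp((cur_val - val) / temp):
            current, cur_val = neighbor, val
            if val < best_val:
                best, best_val = list(neighbor), val
        temp *= cooling
    return sorted(best), best_val

# Toy stand-in evaluation: prefer angles near the hypothetical targets 90 and 270.
def evaluate(angles):
    return sum(min(abs(a - 90), abs(a - 270)) for a in angles)

rng = random.Random(1)
angles = list(range(0, 360, 10))
best, val = simulated_annealing(angles, k=2, evaluate=evaluate, rng=rng)
```

In the framework described above, each `evaluate` call would be an LP solve dispatched to a worker process, which is why parallelizing the evaluations pays off.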
The results of numerical experiments demonstrate that this framework is 5x faster than traditional solution techniques and is able to obtain a clinic-standard treatment plan in a very short time.
Item Optimal Scheduling Models And Algorithms Of Integrated Microgrids(2019-12) Wu, Yiwei; Lim, Gino J.; Peng, Jiming; Lee, Taewoo; Krishnamoorthy, Harish S.; Shi, Jian
The microgrid is a distribution system that integrates the increasing number of renewable energy resources, storage systems, and controllable loads to support flexible and reliable renewable energy distribution. Currently, microgrids can be used for a broad range of applications in rural areas and disaster restoration efforts, and they enable higher efficiency in managing uncontrollable renewable energy resources such as wind and solar. However, there are operational and technological problems in using microgrids that need to be resolved so that the entire electrical community will receive the benefits of clean, high-quality power at lower cost. We have identified three optimization problems in this dissertation: 1) an operational problem to find the optimal electrical power price and quantity at which microgrids should trade (sell/buy) surplus/lacking power with the distribution system, 2) a technological problem to minimize the cost of handling operational uncertainties, such as generators' output and operation mode changes, when the operator schedules a microgrid, and 3) a managerial problem to co-optimize the energy and ancillary service interaction between microgrids and the power system. This work provides insights into these problems and gives some practical solutions. First, we provide a solution for designing a competitive decentralized distribution system. In addition, we identify a clear definition of the role that microgrids can play in this electrical market so that microgrid operators can achieve maximum benefits.
Second, we provide a stability opportunity risk index to evaluate the effects of the microgrid operator's management of scheduling uncertainties. Then, a co-optimization scheme is developed for the microgrid operator to schedule ancillary services from external resources (the distribution system) and internal resources (dispatchable units). Third, a transactive management scheme provides a decentralized solution by constructing a boundary between the responsibilities of the microgrid and the distribution system. By having a bi-directional energy and ancillary service scheme between the two entities, the efficiency of market operation is improved.
Item Optimization Framework for Drone Operations under Constrained Battery Duration(2018-12) Kim, Seon Jin; Lim, Gino J.; Feng, Qianmei; Peng, Jiming; Mo, Yi-Lung; Vipulanandan, Cumaraswamy
Drones, known as unmanned aerial vehicles, receive a tremendous amount of attention from civilian, commercial, and military sectors across the globe as a means of monitoring situations in real time and delivering goods. There has been increasing interest and active research in drones in recent years because they are cheaper and easier to operate. However, one of the major obstacles is that drones are operated by batteries, which severely limits the flight duration over which drones can be practically useful in many applications. Hence, the primary goal of this dissertation is to develop an optimization framework for operating drones under constrained battery duration. First, a framework is developed for routine healthcare service and emergency damage assessment service, in which optimal flight paths of drones and locations of ground control centers are optimized under limited battery duration. Additionally, a two-phase optimization framework is developed to reduce the amount of battery drones consume to reach the damaged area.
Second, a robust optimization framework is proposed to handle the uncertainty in temperature-induced battery capacity reduction. Furthermore, new battery recharging methods are developed to extend the flight duration per charge from the initial launching point. A dynamic wireless battery charging concept is developed to prolong the flight duration of drones for routine monitoring services such as border surveillance. In addition, a hybrid mode consisting of a dynamic wireless battery charging system and a stationary wireless battery charging system is developed to compensate for the major drawback of each charging system. Third, a rerouting process for drones is developed to find an alternative flight path when drones encounter insufficient remaining battery duration, to ensure safe recovery. Undesired flight environments such as strong winds can trigger excessive battery consumption. Such environments can also cause uncertainty in the flight time between waypoints. Hence, a chance constrained programming method is developed as an optimization framework to find an optimal alternative flight path under uncertain flight time.
Item Optimization Model for Optimal Allocation of Mobile Health Clinics(2017-08) Majeed, Bilal; Peng, Jiming; Ding, Xin; Wang, Yaping
Mobile health clinics can be an effective resource in the healthcare delivery system, especially for underprivileged communities. However, mobile health clinics are one of the most underutilized healthcare resources because the optimal management of resources is a convoluted task in a mobile health clinic system. Nevertheless, using data-driven research and optimization techniques, the efficacy of mobile health clinic programs can be enhanced. This thesis provides an optimization model for the optimal allocation of mobile health clinics.
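Returning to the chance-constrained rerouting idea above: with uncertain leg times, the probability that a candidate path fits the remaining battery budget can be estimated by Monte Carlo. The sketch below uses hypothetical normal leg times and a 95% reliability threshold, purely for illustration:

```python
import random

def on_time_probability(leg_means, leg_sd, budget, rng, trials=5000):
    """Monte Carlo estimate of P(total flight time <= battery budget) when
    each leg's time is independent normal (truncated below at zero)."""
    hits = 0
    for _ in range(trials):
        total = sum(max(0.0, rng.gauss(m, leg_sd)) for m in leg_means)
        hits += total <= budget
    return hits / trials

rng = random.Random(3)
# Hypothetical alternatives: a short gusty path vs. a longer calmer detour.
direct = on_time_probability([6.0, 6.0], leg_sd=2.0, budget=14.0, rng=rng)
detour = on_time_probability([4.0, 4.0, 4.0], leg_sd=0.5, budget=14.0, rng=rng)
# Keep only paths meeting a 95% chance constraint.
feasible = [name for name, p in [("direct", direct), ("detour", detour)] if p >= 0.95]
```

The example shows why a chance constraint can favor a nominally longer path: the calmer detour meets the 95% requirement while the shorter, noisier route does not.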
Actual mobile health clinic data from the vaccination program at Texas Children's Hospital has been used, and research has been carried out to maximize demand coverage and overall health outcomes. A mathematical model is formulated for this problem and is solved using the CPLEX solver via GAMS (General Algebraic Modeling System), a high-level modeling system for mathematical programming and optimization. Numerical results demonstrate the efficacy of optimization models and techniques in enhancing demand coverage.
Item Optimization of Radiation Therapy Treatment Planning Considering Setup Uncertainty and Radiobiological Effects(2019-05) Khabazian, Azin; Lim, Gino J.; Mohan, Radhe; Peng, Jiming; Lee, Taewoo; Cao, Wenhua
The clinical goal of radiation therapy is to maximize tumor cell killing while minimizing toxic effects on surrounding healthy tissues. A treatment protocol is used to decide on the treatment strategy and is a description of the desired radiation dose to the various regions of interest. Treatment planning then aims to find a plan as close to the treatment protocol as possible (Romeijn and Dempsey (2008)). Every step of radiation therapy is subject to some type of uncertainty (i.e., setup uncertainty, patient motion, and tumor shrinkage), which may compromise the quality of treatment. Basically, in treatment planning, a region of the patient where both tumor and organs at risk (OARs) are located with a certain probability is irradiated with a lower dose than the prescribed tumor dose. However, under uncertainty, nearby healthy organs that should receive a lower dose may instead be occupied by tumor voxels requiring a higher dose. Although the more ambitious goal is to damage the tumor cells so as to guarantee total tumor coverage, severe patient complications can occur when the surrounding healthy tissues receive an excessive radiation dose.
Therefore, it is desirable to develop an optimization approach that meets prescription requirements and tackles the uncertainties in radiation therapy treatments. The proposed research attempts to overcome these limitations and find optimal beamlet intensities that deliver a dose distribution close to the prescribed dose, leading to better sparing of healthy tissues. First, to control the safety of the critical organs at risk during radiation as well as to provide sufficient tumor coverage, a Chance Constrained Programming (CCP) (Charnes and Cooper (1959)) approach is presented to handle setup uncertainty in radiation treatment planning that allows constraint violation up to a certain degree, as is the case in practice. We assume the uncertain dose distribution is governed by a known probability function and demonstrate that the proposed CCP model can solve treatment planning problems efficiently. Second, a CCP framework for radiation therapy treatment planning is considered in which the probability distribution of the random dose contribution is not completely specified but is only known to belong to a given class of distributions. Sometimes, the information at hand for the random parameter might be limited to the mean, covariance, and/or support of the uncertain data. In these situations, Distributionally Robust Chance Constrained Programming (DRCCP) (Calafiore and El Ghaoui (2006)) is a natural way to deal with uncertainties. An explicit convex condition is provided that guarantees the satisfaction of the probabilistic treatment planning constraints for any realization of the distribution within the given class. Third, to systematically quantify the biological effects of radiation beams, linear energy transfer (LET) is incorporated into the optimization of intensity modulated proton therapy (IMPT) plans.
Because increased LET correlates with increased biological effectiveness of protons, high LETs in target volumes and low LETs in critical structures and normal tissues are preferred in an IMPT plan. Conventionally, the IMPT optimization criteria include only dose-based objectives, in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we add LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. We then explore whether this optimization can not only produce satisfactory dose distributions but also achieve reduced LET distributions (thus lower biologically effective dose distributions) in critical structures and increased LET in target volumes compared to plans created using conventional objectives. Moreover, to effectively treat a cancer patient with radiotherapy, a treatment strategy must be in place that considers dose delivery history and the patient's on-treatment biological changes. However, assessing the biological impacts of radiation on a tumor and the nearby healthy structures is not an easy task. The response of the cells to radiation can be characterized by volume change, and these changes can be investigated by mathematical models that approximate reality. In this study, we seek to understand the importance of considering tumor shrinkage and proliferation during radiation treatment and how this affects the optimal prescribed dose in each fraction. We propose a stochastic sequential optimization framework under setup uncertainty of dose delivery that optimizes the dose in various fractions of an adaptive radiation therapy treatment plan by comparing the damage to tumor cells against the damage to normal tissues volumetrically.
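A common way to make a dose chance constraint tractable is a scenario approximation: sample setup-error scenarios and require the dose limit to be violated in at most an epsilon fraction of them. The sketch below uses made-up dose-influence numbers, not clinical data or the dissertation's formulation:

```python
import random

def chance_constraint_satisfied(intensities, scenario_rows, limit, epsilon):
    """Scenario check of a dose chance constraint: the OAR dose limit may
    be violated in at most an epsilon fraction of sampled setup-error
    scenarios. Each scenario supplies one dose-influence row, and the
    delivered dose is sum(D[j] * x[j])."""
    violations = sum(
        1 for row in scenario_rows
        if sum(d * x for d, x in zip(row, intensities)) > limit)
    return violations <= epsilon * len(scenario_rows)

rng = random.Random(0)
x = [1.0, 2.0, 0.5]  # hypothetical beamlet intensities
# Nominal dose-influence coefficients perturbed by simulated setup error.
scenarios = [[max(0.0, rng.gauss(mu, 0.1)) for mu in (0.8, 0.5, 1.2)]
             for _ in range(1000)]
ok = chance_constraint_satisfied(x, scenarios, limit=2.9, epsilon=0.05)   # loose limit
bad = chance_constraint_satisfied(x, scenarios, limit=2.5, epsilon=0.05)  # tight limit
```

In an actual CCP or DRCCP model this check becomes a constraint inside the optimization rather than a post hoc test, but the feasibility notion is the same.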
Thus, while not prescribing specific strategies, this work provides a framework and guidance for physicians to make appropriate decisions in implementing a safe and efficient treatment plan in their clinics for individual patients.
Item Risk-based Optimization Models for Maritime Safety and Security(2016-08) Biobaku, Taofeek; Lim, Gino J.; Feng, Qianmei; Peng, Jiming; May, Elebeoba E.; Han, Zhu
Considering that unprotected assets and infrastructure in the maritime industry are vulnerable to attacks, we present models and methodologies for protecting these maritime resources from malicious or terrorist attacks. Using risk-based analysis, we employ conditional probabilities to establish relationships between the consequences, vulnerabilities, and threat incidences of maritime events. In the first part of this dissertation, we address the safety and security of maritime assets. We consider routing and scheduling of LNG vessels, which carry hazardous cargo, and present a risk-based methodology for choosing alternate vessel routes between a liquefaction terminal and receiving depot(s). While derivations are presented for the quantification of each constituent of the risk-based model, actual historical data on terrorist/piracy attacks made available by a national consortium on the study of terrorism are used in the analysis. With a multi-vehicle routing model, we test our methodology and present results using a practical test case involving the delivery of LNG. In the second part of this dissertation, we address the safety and security of maritime infrastructure and use underwater sonars for threat detection. Models and algorithms are developed for providing surveillance of maritime infrastructure such as ports, harbors, and jetties. The methodologies in these models include a quantitative risk analysis approach, a network fortification approach, a greedy-based heuristic approach, and a robust optimization approach.
The network fortification approach considers the ability of an intending 'attacker' to possess information about the resource limitations and protection procedures of a 'defender'. Consequently, the 'attacker' attempts to use this information to evade detection, thus compromising the safety and security of maritime infrastructure. In developing greedy-based algorithms to solve large-scale problems in our placement methodology, we exploit the principle of submodularity to propose efficient solution algorithms with theoretical guarantees. Lastly, we develop a robust formulation for our placement methodology to address uncertainties in some modeling parameters. To illustrate that the new sonar placement methodologies help improve protection coverage plans for maritime infrastructure, we use practical case studies on providing safety and security to ports. In addition, we provide analytical and experimental results for each of these studies.
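The submodularity-based greedy placement mentioned above can be sketched as classic maximum coverage: each candidate sonar site covers a set of assets, and greedy selection of the largest marginal gain enjoys the well-known (1 - 1/e) approximation guarantee for monotone submodular objectives. The site names and coverage sets below are hypothetical:

```python
def greedy_placement(coverage, k):
    """Greedy maximum coverage: at each step, pick the candidate site whose
    monitored assets add the most new coverage. Coverage is a monotone
    submodular set function, so greedy is (1 - 1/e)-approximate."""
    chosen, covered = [], set()
    for _ in range(k):
        remaining = [s for s in coverage if s not in chosen]
        best = max(remaining, key=lambda s: len(coverage[s] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical candidate sonar sites and the port assets each would monitor.
coverage = {
    "site1": {"berth1", "berth2", "jetty"},
    "site2": {"berth2", "channel"},
    "site3": {"channel", "jetty"},
    "site4": {"gate"},
}
chosen, covered = greedy_placement(coverage, k=2)
```

Marginal gains shrink as the covered set grows, which is precisely the diminishing-returns property the dissertation exploits for its efficiency guarantees.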