Browsing by Author "Lim, Gino J."
Now showing 1 - 20 of 70
Item: A New Approach to Measure Intangibles in the Economic Analysis of Advanced Technology Projects (2016-05)
Eldressi, Khaled; Lim, Gino J.; Feng, Qianmei; Tekin, Eylem; Akladios, Magdy; Parsaei, Hamid R.
Current global demand for products with more advanced features and capabilities, less weight, and increased aesthetics has driven manufacturers to make significant investments in machinery and tools. Company decisions to invest in advanced technologies are often strategically aimed toward short-term return and frequently do not conform to traditional cost accounting practices, which in many cases may lead to rejection of an investment due to inappropriate measurement techniques. In revitalizing the manufacturing sector of the United States, manufacturing companies have been encouraged by a multitude of incentives to invest capital in plant and equipment enhancements in order to meet and exceed market expectations. The capital investment made by these companies is expected to enhance the capacity to make new products while expanding existing production capacity. Investments in advanced manufacturing and technology systems are often extensive, and their successful implementation requires the full support and commitment of senior management. Traditional justification methods are often tied directly to company cash flow and short return periods, so investments in advanced technology projects are frequently rejected because of their long-term return horizon. In this research, we have developed a methodology to measure intangible attributes and include both tangible and intangible attributes in the economic decision-making process. Multiple attributes that may influence the decision process are included and measured in the proposed method. We present a comprehensive numerical example demonstrating the capability of the methodology.
Additionally, we present conclusions and recommendations for future research in this area of importance to the manufacturing sector.

Item: A Novel Approach to Robust Design Using Recent Advances in Robust and Multiobjective Optimization Methods (2015-12)
Joseph, Gregory; Rao, Jagannatha R.; Grigoriadis, Karolos M.; Song, Gangbing; Lim, Gino J.; Feng, Qianmei
Current advances in the field of Robust Optimization (RO) from such authors as Azarm, Ben-Tal, Elishakoff, Zhang, Renaud, and others have led to new and interesting approaches to the treatment of uncertainty in traditional engineering problems. This work presents the Budget of Uncertainty (BoU) design method, a new method by which such approaches can be applied in a manner that balances the need for optimization with the desire for robust solutions. Where previous work has focused on immunizing an optimization problem against pre-set uncertainty ranges, the BoU method adds design variables in an effort to solve for an appropriate uncertainty range. The BoU method simultaneously determines an optimum solution and an allowed uncertainty budget within a restricted feasibility space. The result is a solution that guarantees first-order satisfaction of uncertain constraints and provides a measure of the problem's sensitivity to its uncertain parameters. This provides additional insight during early problem development and can potentially create alternatives to traditional approaches such as Monte Carlo analysis. Within this work we present a summary of current RO research and introduce the BoU method. We then apply the BoU method to a simple 2D geometric problem to illustrate its application.
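The budget-of-uncertainty idea described in this abstract can be illustrated with a small sketch. This is not code from the thesis; the Bertsimas-Sim-style protection term, the function name, and the toy coefficients are all illustrative assumptions about how a budget Γ limits how many coefficients may deviate adversarially.

```python
# Illustrative sketch (not from the thesis): budget-of-uncertainty
# protection of one linear constraint sum_j a_j x_j <= b, where each
# coefficient a_j may deviate by up to d_j. With budget `gamma`, at
# most `gamma` coefficients deviate adversarially at once.

def worst_case_lhs(a, d, x, gamma):
    """Worst-case value of sum_j (a_j +/- d_j) x_j when at most
    `gamma` coefficients take their extreme deviation."""
    nominal = sum(aj * xj for aj, xj in zip(a, x))
    # the adversary picks the `gamma` largest impacts d_j * |x_j|
    impacts = sorted((dj * abs(xj) for dj, xj in zip(d, x)), reverse=True)
    return nominal + sum(impacts[:gamma])

# A design x is robust-feasible for a given budget if the worst case
# still satisfies the constraint (all numbers below are made up):
a, d, b = [2.0, 3.0, 1.0], [0.5, 1.0, 0.2], 12.0
x = [1.0, 2.0, 1.0]
print(worst_case_lhs(a, d, x, gamma=1))  # nominal 9.0 + largest impact 2.0 = 11.0
```

Growing the budget from 0 toward the number of uncertain coefficients trades optimality for robustness, which is the balance the BoU method searches over.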
Finally, we tackle two well-studied engineering design problems, the Golinski speed reducer and the simple helical spring design problem, to show a more realistic application of the new method.

Item: A Reinforcement Learning Approach for UH Leduc Poker (2021-08)
Sanghani, Parth R.; Eick, Christoph F.; Chen, Guoning; Lim, Gino J.
Poker, especially Texas Hold’em Poker, is a challenging game, and top professionals win large amounts of money at international poker tournaments. Consequently, poker has been a focus of AI research aimed at developing agents that play intelligently. Challenges of poker include partial observability, the need for probabilistic reasoning as hands are dealt randomly, the difficulty of dealing with an unknown adversary, the capability to bluff, and the difficulty of assessing the quality of a hand in a particular game context. Leduc Hold’em Poker is a popular, much simpler variant of Texas Hold’em Poker and is widely used in academic research. This work centers on UH Leduc Poker, a slightly more complicated variant of Leduc Hold’em Poker. The goal of this thesis is the design, implementation, and evaluation of an intelligent agent for UH Leduc Poker that relies on a reinforcement learning approach. In particular, our approach employs Deep Q-Learning, and the agent is implemented using TensorFlow in Python. The UH Leduc Poker agent is trained by playing tournaments against a fixed-policy agent that plays smartly according to the quality of its hand and the current state of the game. We also investigate the influence of different reinforcement learning parameters on the agent's performance.
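The thesis above uses Deep Q-Learning, where a neural network approximates the action-value function. A minimal tabular sketch of the underlying Q-learning update may clarify the idea; the states, actions, and reward below are invented for illustration and are not from the thesis.

```python
# Minimal tabular sketch of the Q-learning update that Deep Q-Learning
# approximates with a neural network. Poker states/actions here are
# purely illustrative placeholders.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One Q-learning step: Q(s,a) += alpha * (target - Q(s,a)),
    where target = r + gamma * max_a' Q(s_next, a')."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    target = r + gamma * best_next
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)
actions = ["fold", "call", "raise"]
# hypothetical transition: agent raised with a strong hand and won 2 chips
q_update(Q, "strong_hand", "raise", 2.0, "showdown", actions)
print(round(Q[("strong_hand", "raise")], 3))  # 0.2
```

In the deep variant, the table `Q` is replaced by a network trained on sampled transitions, but the target computation is the same.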
Finally, we conducted experiments that assess how well the UH Leduc Poker agent plays against several fixed-policy agents as well as human players.

Item: A Scalable Variational Inequality-Based Formulation That Preserves Maximum Principles for Darcy Flow with Pressure-Dependent Viscosity (2017-08)
Mapakshi, Nischalkarthik; Nakshatrala, Kalyana Babu; Willam, Kaspar J.; Vipulanandan, Cumaraswamy; Lim, Gino J.
The overarching goal of this thesis is to present a robust and scalable finite element computational framework based on variational inequalities (VI) which models nonlinear flow through heterogeneous and anisotropic porous media without violating discrete maximum principles (DMP) for pressure. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations, and previous studies have shown that the VI approach can enforce the DMP for linear and semi-linear subsurface flow and transport problems. Herein, the same VI framework is extended to the nonlinear modified Darcy flow (MDF) model, which incorporates pressure-dependent viscosity. Although it can be proven that the MDF model satisfies maximum principles, most finite element formulations, including the classical Galerkin formulation with Raviart-Thomas elements and the variational multi-scale formulation, will not adequately enforce the DMP if strong levels of anisotropy are present. Several representative reservoir problems with realistic parameters are presented, and both the algorithmic and parallel scalability of the proposed computational framework are studied.

Item: An Optimization Framework for Resilience-Based Power Grid Restoration (2018-08)
Abbasi, Saeedeh; Lim, Gino J.; Lee, Taewoo; Peng, Jiming; Barati, Masoud; Vipulanandan, Cumaraswamy
A power outage is a severe consequence of an extreme event and affects a wide range of consumers, including homes, hospitals, and commercial industries.
An extreme event such as a hurricane, windstorm, or earthquake can disrupt power grids located in open areas. In a power grid, transmission lines are the most vulnerable equipment, and their damage usually results in a cascading failure of the whole network. Although a power system should be strengthened in advance to withstand these events, having a plan to restore a failed power grid is essential. Emergency generation units play an important role in a restoration process; these pre-located units are called black start (BS) units. The restoration process with BS units is conducted as a parallel restoration over independent sections within a network. Appropriate sectionalization makes a power system more resilient against a long outage. Assessing and optimizing the resilience of a power system can improve the quality of the restoration process. To achieve such a resilient power system, a mathematical model is presented to maximize the system’s resiliency while planning a restoration process. The system resiliency is measured through an innovative resilience vector. As a result, the restoration can be performed quickly enough to satisfy all critical demands. The model is a mixed integer program (MIP), which is decomposed into a bi-level model that can be solved with lower complexity. As an alternative to bi-level programming, a mathematical programming with equilibrium constraints (MPEC) approach is also applied to solve the model. The comparison between the results of both methods demonstrates the high efficiency of the bi-level programming solution methodology on a large-scale case. A pre-emptive goal programming (PEP) method further supports the solution methodologies by handling multiple terms with different scales and priorities in the objective function of the model. The model is analyzed on the 6- and 118-bus IEEE standard test systems.
Sectionalization of a transmission network is closely associated with partitioning a graph (i.e., the lines are treated as edges and the grid buses as graph nodes). The graph partitioning problem (GPP) is formulated as a MIP model that minimizes the number of cut edges so that well-connected sections can be formed. The proposed restoration model is therefore combined with the GPP model: the sectionalization constraints are replaced with GPP constraints, while the GPP objective is added to the model’s objective. The new GPP-based restoration model is examined on both the 6- and 118-bus case studies, and the results are compared with the first sectionalization approach. The analysis of the advantages and disadvantages of the first and second restoration models is ongoing. Both proposed deterministic models are solved under the assumption that the status of the transmission network after a disruption is given; however, it is rarely possible to precisely predict the post-event status of a transmission network following extreme weather. Hence, the post-event status of the transmission lines can be considered a source of uncertainty. In this study, a robust optimization model is provided to address this uncertainty. The proposed robust model is a scenario-based version of the GPP-based model, with scenarios prepared from simulated hurricane wind speeds and the fragility profile of the transmission lines. Furthermore, a worst-case model against all realizations of the grid post-event status is provided. The results on the 118-bus test system give a reliable solution for all realizations of the scenarios with a narrow band in the objective performance measures. Dealing with large network-structured systems such as a power system is difficult; for this reason, parallel processing based on partitioning the network is recommended, which can facilitate the process by reducing the size of the network handled at each step.
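The partition-quality ideas in this abstract (minimizing cut edges, and the modularity/edge-connectivity metrics discussed next) can be sketched in a few lines. The 6-node toy network below is invented for illustration and is not one of the dissertation's test cases.

```python
# Sketch of two partition-quality measures: the number of cut edges
# between sections and the Newman modularity of a partition.
# The toy 6-node network is illustrative only.

edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
sections = [{1, 2, 3}, {4, 5, 6}]

def cut_size(edges, sections):
    """Count edges whose endpoints fall in different sections."""
    return sum(1 for u, v in edges
               if not any(u in s and v in s for s in sections))

def modularity(edges, sections):
    """Newman modularity: for each section, the fraction of internal
    edges minus the expected fraction under a degree-preserving null
    model, summed over sections."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for s in sections:
        internal = sum(1 for u, v in edges if u in s and v in s)
        d = sum(deg[n] for n in s)
        q += internal / m - (d / (2 * m)) ** 2
    return q

print(cut_size(edges, sections))              # 1 (only edge 3-4 crosses)
print(round(modularity(edges, sections), 3))  # 0.357
```

The MIP formulation in the dissertation optimizes over the choice of `sections`; this sketch only evaluates a fixed partition.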
The common partitioning criterion is modularity, but considering an additional metric alongside it benefits the result. The final chapter of the dissertation addresses the vulnerability of partitions in undirected network partitioning via maximization of edge-connectivity together with modularity. Edge-connectivity is a graph metric that represents the robustness of the sub-networks, and optimizing it enhances the robustness of the partitions. The problem is formulated as a bi-objective maximization model. The results on multiple random test cases of different sizes are analyzed to demonstrate the model’s performance.

Item: ANALYTICAL MODELING AND VERIFICATION WITH NUMERICAL METHODS AND EXPERIMENTS OF THREE PHASE REACTIVE MATERIALS, DRILLING, CEMENTING AND PERFORMANCE OF SMART CEMENTED MODEL OIL WELLS (2016-12-15)
Basirat, Bahareh; Vipulanandan, Cumaraswamy; Nakshatrala, Kalyana Babu; Rixey, William G.; Samuel, Robello; Lim, Gino J.
In this study, new analytical models were developed based on a nonlinear rheological model to predict the drilling and cementing of an oil well. In addition, the long-term performance of a model field well that was cemented using the smart cement was predicted using a nonlinear piezoresistive model and a numerical model. A new reactive three-phase material model was developed to characterize three-phase materials with reactive constituents and to introduce six independent reactive material parameters that depend on the curing time, temperature, and pressure, which contribute to the phase transition. This model can be used for any three-phase material, such as cement, drilling mud, filter cake, oil-rich rocks, and medicines. In order to verify the model, cement slurry with a water-cement ratio of 0.4 was tested for over 800 days. The changes in the weight, volume, and moisture content with curing time were monitored to quantify the change in the three phases.
The influence of the six material parameters on the shrinkage, porosity, and electrical resistivity of the solidified cement was verified. The resistivity of the cement was influenced by the one reactive model parameter that represented the direct reaction of the liquid phase with the solid phase. The fracture behavior of the smart cement was also evaluated using electrical property monitoring in addition to a crack mouth opening displacement (CMOD) gauge. The Vipulanandan failure model was also compared with the Drucker–Prager criterion and verified with experimental results. In this study, well drilling and casing installation were investigated analytically using the new shear-thinning rheological model. The analytical model predictions were compared to those of the Newtonian model, which overpredicted the flow velocities and shear stress by 300%. The analytical solutions were also verified using a numerical method. The effects of eccentricity on axial flow were investigated numerically, while the eccentricity effect on vortex flow was analyzed analytically. In addition, a new kinetic model was developed, assuming that the permeability and solid content of the filter cake change with time, temperature, and pressure, using a hyperbolic model. The new kinetic model was verified with fluid loss results and compared with the API model. The pumping of cement slurry during well installation was also investigated in terms of the shear stress developed at the casing and geological formation interfaces. Three physical models simulating cemented wells (a small model, a large model, and a field model) were tested. During the tests, the pressure applied inside the casing in the small and large model tests and the change of resistivity in the smart cement were measured, and the p-q model was used to correlate the casing pressure to the cement resistivity changes.
The numerical model was analyzed to determine the stresses and displacements along and around the wellbore and was verified against the stresses predicted from piezoresistive effects. The smart cement can also be used to predict the pressure inside the well.

Item: Analytical Models and Data-Driven Methods for Radiation Therapy Treatment Planning (2021-08)
Ebrahimi, Saba; Lim, Gino J.; Lee, Taewoo; Lin, Ying; Mayerich, David; Cao, Wenhua
The clinical goal of radiation therapy (RT) is to maximize tumor damage and kill all the cancerous cells while minimizing toxic effects on surrounding healthy tissues during the course of treatment. Adaptive radiation therapy (ART) has been widely used to adjust the radiation dose in response to potential changes in tumor volume during the treatment in order to reduce radiation toxicity in healthy organs. One of the key challenges in ART is to determine the best time to adapt the plan in response to uncertain tumor biological responses to radiation during the treatment. Tumor biological responses change dynamically over time and can differ from one patient to another. Therefore, considering tumor biological responses to radiation in ART treatment planning is challenging due to the high levels of uncertainty in biological factors. Determining the possibility of treatment side effects for each patient before starting the treatment is another challenge in radiation therapy treatment planning. This dissertation focuses on a combination of optimization, deep learning, and statistical methods to address these challenges and improve the survival of cancer patients treated with radiation therapy. We tackle this problem from two different perspectives: (1) developing effective personalized radiation therapy treatment plans, and (2) predicting possible critical side effects of the treatment for each patient before the treatment.
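Tumor response to a radiation dose is commonly summarized with the standard linear-quadratic (LQ) cell-survival model; the dissertation's tumor response model is more elaborate, but a minimal LQ sketch illustrates the dose-response reasoning. The parameter values below are illustrative textbook-style numbers, not patient data or values from the dissertation.

```python
# Linear-quadratic (LQ) cell-survival sketch: the fraction of cells
# surviving a single radiation fraction of dose d (Gy) is
#   S(d) = exp(-(alpha*d + beta*d^2)).
# alpha and beta below are illustrative tissue parameters
# (alpha/beta = 10 Gy, a value often quoted for tumors).
import math

def lq_surviving_fraction(d, alpha=0.35, beta=0.035):
    """Surviving fraction after one fraction of dose d (Gy)."""
    return math.exp(-(alpha * d + beta * d ** 2))

def survival_after_course(d_per_fraction, n_fractions):
    """Assumes independent, equally effective fractions."""
    return lq_surviving_fraction(d_per_fraction) ** n_fractions

# Conventional fractionation: 2 Gy per fraction, 30 fractions
print(f"{survival_after_course(2.0, 30):.2e}")  # 1.14e-11
```

An adaptive plan, in this framing, revisits `d_per_fraction` mid-course as the estimated biological response departs from the assumed one.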
First, we propose an automated radiation therapy treatment planning framework using reinforcement learning (RL) which incorporates uncertainty in tumor biological responses during treatment to find the optimal policy for ART. We also provide a novel tumor response model to estimate tumor volume changes and radiation responses during the treatment. This approach helps the decision-maker control both the biological and physical aspects of the treatment and achieve a robust solution under biological uncertainties without dealing with complex optimization models. The presented method provides much-needed flexibility, in which a plan can be customized based on the patient case, the cancer type, and the decision-maker’s preference on treatment outcomes. Second, we address one of the critical radiation therapy treatment side effects, known as radiation-induced lymphopenia (RIL). RIL occurs due to a severe reduction in the absolute lymphocyte count (ALC) after radiation exposure and can seriously affect patient survival. Therefore, we aim to assess the role of radiation therapy in ALC depletion to determine high-risk patients. To accomplish this goal, two mathematical models are proposed to approximate lymphocyte depletion based on radiation dose distributions and the ALC baseline for radiation therapy patients. Finally, we compare the potential post-treatment lymphocyte survival outcomes in cancer patients for photon- and proton-based RT modalities. Third, we develop a hybrid deep learning model in a stacked structure to predict the ALC depletion trend throughout radiation therapy treatment for cancer patients based on pretreatment clinical information. We then extend the model to make predictions after the initial phase of treatment (e.g., at the end of week 1). A discriminative kernel is also developed to extract and evaluate the importance of temporal features.
The presented deep learning structure can efficiently use information from different groups of clinical features to predict ALC depletion without requiring a large amount of data to process many features, while reducing bias and generalization error. This approach helps physicians identify patients at risk of severe RIL who might benefit from modified treatment approaches, ultimately improving patient survival. In the last part of this dissertation, we provide an approach to estimate prediction intervals for ALC values. The proposed approach enables practical implications of predictive models in clinical decision-making by estimating individualized predictive uncertainties. Finally, a comprehensive hybrid decision-making framework is proposed to assess the RIL risk for a given patient based on a given treatment plan and its predicted post-treatment lymphocyte survival outcome. This decision-making framework can serve as a guide for physicians to take advantage of advanced deep learning models and make appropriate decisions in selecting the safest treatment plan for an individual patient in the clinic.

Item: Analytics Approaches to the Development of Diabetic Retinopathy Screening Policies (2023-08)
Dorali, Poria; Lim, Gino J.; Lee, Taewoo; Lin, Ying; Weng, Christina Y.; Deshmukh, Ashish A.; Peng, Jiming
Diabetic retinopathy (DR) is the leading cause of blindness for working-age adults in the US. Over 60% of patients with type II diabetes and 90% of patients with type I diabetes develop DR within 20 years of diagnosis. Routine comprehensive screening examinations have proved effective in detecting early stages of DR, and timely treatment can prevent up to 98% of DR-related vision loss. However, only 50-60% of diabetic patients adhere to the current annual screening guidelines. Recently, teleretinal imaging (TRI) has emerged as an accessible screening tool for patients with limited access.
However, there exists no well-established guideline that incorporates TRI-based screening for such patients. In this thesis, we take a multi-pronged analytics approach to quantify and evaluate the advantages and limitations of TRI compared with traditional clinic-based screening (CS) and propose new screening policies for patients with limited access to eye care. First, we develop a simulation model that examines the health and cost benefits of various routine CS and TRI-based DR screening policies at different time intervals for various types of diabetic patients. Additionally, we identify patient subgroups who would truly benefit from TRI in terms of health benefits and cost savings. Second, we develop a partially observable Markov decision process (POMDP) model to generate personalized DR screening recommendations that exploit the dynamic interaction of TRI and traditional screening based on each patient’s unique health-related and behavioral factors. Lastly, we develop a decision tree model that establishes interpretable DR screening policies by transforming the complex, POMDP-driven personalized screening policies into policies that are more explainable, implementable, and adoptable in clinical practice.

Item: Asset Analytics of Smart Grid Infrastructure for Resiliency Enhancement (2015-05)
Arab, Ali; Khator, Suresh K.; Han, Zhu; Lim, Gino J.; Tekin, Eylem; Khodaei, Amin
First, a post-hurricane restoration model for the power grid which considers the economics of disaster is introduced. The physical and economic constraints of the system, including unit commitment and restoration constraints, are incorporated in the proposed model. The aim is to restore hurricane-related damage to the electric power system infrastructure in an economic and customer-centered manner, without violating the physics of the system, in order to mitigate the aftermath of natural disasters.
Second, a proactive resource allocation model for the repair and restoration of potential damage to power system infrastructure located on the path of an upcoming hurricane is proposed. The objective is to develop an efficient framework for system operators to restore potential damage to power system components in a cost-effective manner. The problem is modeled as a two-stage stochastic integer program with recourse. This model can improve the proactive preparedness of decision makers to cope with emergencies, especially those of natural origin, in order to minimize the restoration cost and enhance the resilience of the power system. Third, a model is proposed to incorporate the impact of potential hurricane damage into the maintenance scheduling of power infrastructure components located in hurricane-prone areas. The power infrastructure deterioration process, as well as two competing and independent failure modes, i.e., failure due to loss of reliability and failure due to hurricane damage, are integrated into the model. Moreover, the interrelationships among the component, the grid, and the associated downtime cost dynamics are analyzed. The problem is modeled as a Markov decision process with perfect state information. Fourth, the impact of the El Niño/La Niña phenomenon, which has been shown to induce seasonal effects on hurricane arrivals over a long-term climatological horizon, is considered in asset management strategies for electric power systems. An integrated infrastructure hardening and condition-based maintenance scheduling model for critical components of the power systems is developed. Partially observable Markov decision processes are used to formulate the problem.
The survival function against hurricanes is derived as a dynamic stress-strength model and is incorporated into the proposed framework.

Item: Behavior of Polymer Grouted Sand and Polymer Modified Smart Cement with Verification of New Failure Model for Concrete (2018-12)
Krishnathasan, Mayooran; Vipulanandan, Cumaraswamy; Mo, Yi-Lung; Lim, Gino J.
In this study, acrylamide polymer was added to the smart cement, and the rheological, mechanical, and corrosion resistance properties of the resulting composites were studied. The ability of acrylamide-polymer-modified grouts to further enhance the physical, mechanical, and sensing properties from the time of mixing through pumping and final setting was also examined. Wet-and-dry cycling of acrylamide-polymer-grouted sand modified with algae was studied, and the electrical sensing ability of the composition was verified using weight measurements. The splitting tensile strength of grouted sand prepared without and with algae increased by 118% and by 130% to 180%, respectively, after one wet-and-dry cycle. The test results showed that the moisture loss and chemical shrinkage of the smart cement were reduced with the addition of 2.5% concentrated acrylamide polymer, from 1.2% to 0.1% and from 4.6% to 0.4%, respectively. It was identified that the flowability of the smart cement was reduced after modification with highly concentrated acrylamide polymer. The rheological properties of the acrylamide polymer, the smart cement, and the acrylamide-polymer-modified smart cement were modeled using the Modified Bingham, Herschel-Bulkley, and Vipulanandan rheological models. Based on the coefficient of determination and the root mean square error, the Vipulanandan model predicted the test results well. The investigation of the rheology of polymer-modified cement with UH biosurfactant showed that it improved the workability of the acrylamide-polymer-modified cement and produced a 16% increase in the maximum shear stress (τmax).
The API (30-minute) fluid loss was 131.4 mL for the smart cement; with the addition of acrylamide polymer, the fluid loss was reduced to 77.3-83.8 mL. The splitting tensile strength of the smart cement increased by 22% with the addition of 2% acrylamide polymer, and the piezoresistivity at peak stress was 35%, which reduced to 18% with the addition of 1.5% polymer. Over 300 uniaxial, biaxial, and triaxial test results on plain concrete with uniaxial compressive strengths in the range of 22 MPa to 70 MPa were used to verify the Vipulanandan generalized concrete failure model.

Item: CALIBRATION OF PHI FACTORS FOR PRESTRESSED BRIDGE GIRDERS (2014-12)
Forouzannia, Faranak; Gencturk, Bora E.; Dawood, Mina; Belarbi, Abdeldjelil; Lim, Gino J.
Calibration of the flexural resistance factors in the American Association of State Highway and Transportation Officials' (AASHTO) Load and Resistance Factor Design (LRFD) format is performed for bridge girders prestressed with Carbon Fiber-Reinforced Polymers (CFRP). The underlying principle of LRFD design is to achieve a uniform probability of failure (target reliability) for all possible design scenarios, which is accomplished through resistance and load factors. Calibration of the resistance factors requires an extensive design space to be applicable to different design scenarios. For this purpose, 12 design cases with various span lengths, girder positions, girder spacings, roadway widths, and failure modes were considered.
The load and resistance model random variables and their statistics, the flexural resistance model accuracy, and the results of the Monte Carlo simulation through which resistance factors were derived for different target reliabilities, for interior and exterior girders failing in tension and interior girders failing in compression, are presented.

Item: Cardiovascular Disease Management via Rule-Based Personalized Lifestyle Recommendation (2023-05-08)
Alnazzal, Thamer S.; Lin, Ying; Lim, Gino J.; Feng, Qianmei; Bian, Zheyong
Cardiovascular disease (CVD) is a major cause of death worldwide, and its onset is highly correlated with various predictors such as age, gender, and lifestyle. Several types of CVD are preventable by modifying lifestyle behaviors, but the existing guidelines on lifestyle modification were developed for the general population and have limited utility for individuals. Numerous machine learning models have been developed for personalized lifestyle recommendation by predicting an individual's CVD risk from associated predictors and searching for the modifications to lifestyle predictors that maximally reduce the CVD risk. However, most machine learning models function as a black box, predicting and managing CVD without revealing the contribution of each predictor or how to interpret the causes of CVD. Recent advances in rule-based machine learning models not only guarantee accurate stratification of individual risks but also enable automatic identification of interpretable risk-predictive rules for describing the characteristics of different risk groups, thus holding great promise to inform policy design for clinical practice. However, the utility of rule-based models for CVD risk prediction and personalized lifestyle recommendation has yet to be explored. Moreover, due to the complex interactions between lifestyle behaviors and other predictors, how to leverage the risk-predictive rules for personalized lifestyle recommendation is a challenging problem.
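The interpretability argument for rule-based risk models can be made concrete with a tiny sketch. The rules, thresholds, and patient fields below are invented for illustration; they are not rules discovered from the study's data.

```python
# Illustrative rule-based risk stratification (rules and thresholds
# are hypothetical, not from the thesis). Each rule maps a patient
# profile to a risk group, and the matching rule itself explains the
# assignment, unlike a black-box score.

RULES = [
    # (description, predicate, risk group)
    ("age >= 60 and smoker",     lambda p: p["age"] >= 60 and p["smoker"], "high"),
    ("sbp >= 140 and sedentary", lambda p: p["sbp"] >= 140 and not p["active"], "high"),
    ("age >= 45",                lambda p: p["age"] >= 45, "medium"),
]

def stratify(patient, default="low"):
    """Return the risk group of the first matching rule plus the rule
    text, so every assignment carries its own explanation."""
    for desc, pred, group in RULES:
        if pred(patient):
            return group, desc
    return default, "no rule matched"

group, why = stratify({"age": 62, "smoker": True, "sbp": 130, "active": True})
print(group, "-", why)  # high - age >= 60 and smoker
```

A lifestyle recommendation step would then search the modifiable fields (e.g., `smoker`, `active`) for the change that moves the patient out of the matched high-risk rule.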
In this study, we focus on answering two main research questions. First, we develop a rule-based model to discover risk-predictive rules associated with CVD, stratify individual risks, and compare the model with other machine learning models. Second, we develop a rule-based personalized lifestyle recommendation algorithm to recommend healthy lifestyle behaviors that help individual patients decrease their risk of CVD. By applying the proposed methods to a national community study dataset, we demonstrate their effectiveness in CVD risk prediction, the quality of the discovered rules, and the efficiency of the recommended lifestyle modifications. The discovered rules hold great promise to advance our understanding of the pathology of CVD and allow new guidelines to be developed for lifestyle modification.

Item: Characterization of the Smart Oilwell Cement Modified with Metakaolin (2014-08)
Khodaean, Seyed Amirhossein; Vipulanandan, Cumaraswamy; Mo, Yi-Lung; Lim, Gino J.
For a successful cementing operation, it is critical to determine the flow of the cement slurry between the casing and the formation, the setting of the cement in place, and the performance of the cement after hardening. At present there is no technology available to monitor cementing operations in real time from the time of placement to the end of the borehole service life. In this study, well cement was modified using carbon fibers and other additives to give it better sensing properties, also known as smart cement, so that its behavior can be monitored during the cementing operation and over its lifetime. Electrical resistivity was identified as the sensing property of the smart cement slurry and the hardened cement. In this study, up to 10% Metakaolin was used. Metakaolin increased the initial resistivity of the cement by 25%, increased the piezoresistive behavior of the cement by about 60%, and improved the cement's compressive strength by about 20%.
Contamination can be detected by means of the initial resistivity, and its negative effect on the compressive strength and sensing properties was reduced by the addition of Metakaolin.

Item: Characterizing and Modeling of Ultra-Soft Clay Soil, Filter Cake and Drilling Mud (2015-12)
Raheem, Aram Mohammed Raheem; Vipulanandan, Cumaraswamy; Rixey, William G.; Nakshatrala, Kalyana Babu; Lim, Gino J.; Khan, Shuhab D.
In this study, ultra-soft soils representing the deepwater seabed offshore, coastal soils, and onshore soils, along with filter cakes and drilling muds, were characterized using new non-destructive in-situ test methods, and their behavior was modeled. The new test methods for characterizing the ultra-soft soils included the two-probe electrical method and the CIGMAT miniature penetrometer. The clay content in the ultra-soft soils, filter cakes, and drilling muds investigated in this study varied from 2% to 10% by weight. The types of clays investigated include montmorillonite (bentonite) and kaolinite. The shear strength of the ultra-soft soils varied from 0.01 kPa to 0.30 kPa as measured with the modified vane shear test. Electrical characterization identified the ultra-soft soils as a resistive material. Several modifiers, including lime, polymer, sand, and cement, were used to treat the ultra-soft soils, and their effects on the shear strength, electrical resistivity, water content, density, and electrical impedance were investigated. The shear strength of the treated ultra-soft soil reached a maximum of 6.8 kPa, a change in shear strength of 2167%, with 10% polymer treatment. Electrical resistivity was correlated with the solids content, shear strength, and water content for treated and untreated ultra-soft soils. Experimental, analytical, statistical, and finite element methods were used to model the stress-strain relationship of the ultra-soft soils.
Filter cake formation and fluid loss occur concurrently during various engineering operations, including oil well drilling, and are influenced by the seepage and consolidation of the cake. A new coupled continuous function of time and depth was developed to represent the combined seepage-consolidation phenomena during filter cake formation under different pressures and temperatures. The new continuous-function solution was compared with Terzaghi's discrete consolidation solution, and both solutions were verified using several experimental results. Currently, filter cake is modeled using the API method, in which the cake properties are assumed to be constant while the cake thickness varies with time. In the new kinetic model developed in this study, the variations of fluid loss, porosity, permeability, relative solid content, and cake thickness with time have been included. The new kinetic model also takes into account the effects of both high pressure and high temperature. In addition, the new kinetic model places a limit on the maximum amount of fluid loss, whereas the API method predicts the maximum fluid loss to be infinite. The predictions of both the API and the new kinetic models were verified using several high-pressure and high-temperature test results from the current study and the reported literature. The rheological behavior of drilling mud with and without contamination was investigated at different temperatures using the Herschel-Bulkley and hyperbolic models. Nonlinear models were used to investigate the combined effects of bentonite and salt contamination, and of changes in temperature, on the fundamental properties of the drilling mud, such as the yield and maximum shear stresses, electrical resistivity, and other hyperbolic model parameters.
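The Herschel-Bulkley model mentioned above is the standard three-parameter rheological law τ = τ₀ + K·γ̇ⁿ (yield stress, consistency index, flow index). One simple way to fit it, sketched below on synthetic data rather than the study's measurements, is to grid-search the yield stress and log-linearize the remaining power law:

```python
import math

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) at shear rate gamma_dot (1/s): tau0 + K * gamma_dot**n."""
    return tau0 + K * gamma_dot ** n

def fit_hb(rates, stresses, tau0_grid):
    """Fit (tau0, K, n): grid-search tau0, then linear regression of
    log(stress - tau0) on log(rate) to recover n (slope) and K (intercept)."""
    best = None
    for tau0 in tau0_grid:
        pairs = [(math.log(g), math.log(t - tau0))
                 for g, t in zip(rates, stresses) if t > tau0]
        if len(pairs) < 2:
            continue
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        n = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)
        K = math.exp(my - n * mx)
        sse = sum((herschel_bulkley(g, tau0, K, n) - t) ** 2
                  for g, t in zip(rates, stresses))
        if best is None or sse < best[0]:
            best = (sse, tau0, K, n)
    return best[1:]

# Synthetic data generated from tau0=5, K=2, n=0.6 (illustrative, not measured).
rates = [1, 5, 10, 50, 100, 300]
stresses = [herschel_bulkley(g, 5.0, 2.0, 0.6) for g in rates]
tau0, K, n = fit_hb(rates, stresses, tau0_grid=[i * 0.5 for i in range(20)])
print(round(tau0, 1), round(K, 2), round(n, 2))  # recovers 5.0 2.0 0.6
```

With n = 1 the law reduces to the Bingham plastic model, and with τ₀ = 0 to a simple power-law fluid, which is why Herschel-Bulkley is a common umbrella model for drilling muds.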
The nonlinear model showed that, within the range of variables studied, the bentonite content in the drilling mud had the largest effect in decreasing the electrical resistivity and the yield and maximum shear stresses, compared with salt contamination and temperature.

Item Characterizing and Modeling the Dynamic Responses, Gas Leakage and Contaminations on the Behavior of the Smart Cement Composite (2017-12) Amani, Niousha; Vipulanandan, Cumaraswamy; Nakshatrala, Kalyana Babu; Mo, Yi-Lung; Chen, Yuhua; Lim, Gino J.
Cement composites are among the most durable construction materials and can be used in many types of structures. Monitoring the behavior of cementitious structures is critical during construction and over the entire service life in order to certify the integrity and safety of the structures. This study provides a systematic dynamic characterization of smart cement composites and demonstrates the monitoring of gas leakage and contamination in smart oil well cement using electrical measurements. Smart cement composites were developed with up to 75% gravel, 10% hydrophilic polymer resin, and a few other additives. Investigation of the electrical impedance versus frequency relationship indicated that the smart cement composites can be represented by a resistance. The addition of coarse aggregates and hydrophilic polymer resin (HPR) increased both the initial and the long-term electrical resistivity of the smart cement composite. The initial electrical resistivity of the smart cement was 1.02 Ω.m, which increased nonlinearly to 3.74 Ω.m and 10.55 Ω.m with the addition of 75% gravel and 10% HPR, respectively. HPR produced zero fluid loss in the smart cement composites, owing to the polymerized texture of the composite. Compressible and incompressible fluid flow through porous media was also studied. The Vipulanandan fluid flow model and other models, including a modified Darcy's law, were used to characterize the flow of gas in sand and cement porous media.
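Darcy's law, the baseline that such modified models extend, gives the volumetric flux through a specimen as q = (k/μ)·Δp/L. A minimal sketch with illustrative values (a permeability of roughly 1 darcy for sand and an approximate nitrogen viscosity; none of these numbers are from the study):

```python
def darcy_flux(permeability_m2, viscosity_pa_s, dp_pa, length_m):
    """Darcy volumetric flux q = (k / mu) * (dp / L), in m/s."""
    return permeability_m2 / viscosity_pa_s * dp_pa / length_m

k = 9.87e-13    # ~1 darcy, a typical clean-sand permeability (illustrative)
mu = 1.76e-5    # nitrogen viscosity at room temperature, Pa*s (approximate)
q = darcy_flux(k, mu, dp_pa=2.0e6, length_m=0.10)  # 2 MPa drop over 10 cm
print(f"{q:.3e}")  # flux in m/s
```

Darcy's law predicts a flux strictly proportional to the pressure gradient; at gradients this steep the prediction becomes unrealistically large, consistent with the abstract's observation that gas discharge does not in fact scale linearly with the pressure gradient.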
Experimental results showed that the rate of gas discharge from the porous media does not increase linearly with the pressure gradient, owing to changes in both the fluid and the porous-medium properties with pressure. This trend was predicted using the Vipulanandan fluid flow model. The electrical resistivity of the smart cement was measured during nitrogen gas migration under different pressures. The electrical resistivity of the 6-hour-cured smart cement decreased by 12% under nitrogen migration at a pressure of 2 MPa (300 psi). Moreover, the sensitivity of electrical measurements in monitoring dynamic conditions on smart cement composites, such as impact, cyclic loading, and cyclic temperature, was investigated. Impact loading produced up to a 2% increase in electrical resistivity for 28-day-cured smart cement, with a resonance frequency of 7.3 Hz. Cyclic loading increased the electrical resistivity, and the increase depended on the displacement rate as well as the ultimate pressure. At a compression stress of 1.03 MPa (150 psi), the change in electrical resistivity was 19.45%, 14.56%, and 9.9% for the 0.008, 0.016, and 0.032 displacement rates, respectively. Temperature changes also alter the electrical resistivity of the smart cement. Increasing the temperature from 60° C to 120° C decreased the resistivity by 37%, from 12.38 Ω.m to 7.83 Ω.m. Decreasing the temperature from 0° C to -20° C caused an increase of 1692%, from 20.19 Ω.m to 361.91 Ω.m. Based on the experimental studies, a new model was proposed correlating the rate of change in electrical resistivity, the temperature gradient, and the temperature rate. Finally, the contamination and degradation of smart cement by drilling mud or CO2 exposure were studied, and the sensitivity of electrical measurements for monitoring any kind of contamination of smart cement composites was investigated.
OBM contamination filled the pores of the loose net structure around the cement particles, which increased the rheological properties of the cement slurry. It also reduced the development of resistivity during 28 days of curing because of its hindering effect on the hydration process, which led to less production of C-S-H after 28 days. OBM contamination of 0.1% and 3% reduced the resistivity of the cement by 22% and 42%, to 9.5 Ω.m and 7 Ω.m, respectively. Studies indicated that one of the most significant leakage mechanisms is likely to be the flow of CO2 along the cement, which can cause cement degradation. CO2 exposure reduced the development of resistivity during 28 days of curing. CO2-concentrated water at 0.1%, 1%, and 3% reduced the resistivity of the cement by 21%, 34%, and 38%, to 13.4 Ω.m, 11.3 Ω.m, and 10.5 Ω.m, respectively, after 28 days of curing. The Vipulanandan p-q model was used to predict the composites' curing, piezoresistive behavior, the resistivity of the mixtures, and the variation of pulse velocity with curing time.

Item Characterizing and Modeling Wood and Smart Cement with Additives for Real Time Moisture Detection (2020-08) Bhatia, Shivam; Vipulanandan, Cumaraswamy; Mo, Yi-Lung; Lim, Gino J.
In this study, the changes in wood (organic) and smart cement (inorganic) due to moisture changes were characterized electrically using the two-probe method. Ultrasonic pulse velocity was also used to investigate the changes in the compressive wave speed with changes in moisture content. Smart cement was modified by adding UH-biosurfactant, and the changes in the initial resistivity, curing characteristics, and piezoresistive behavior were characterized. In addition, smart cement was exposed to different (external) water levels, and the changes in resistivity were correlated with the moisture content. The experimental results were correlated with Vicat apparatus tests and were modeled using Vipulanandan models and Artificial Neural Network (ANN) models.
Wood, one of the most commonly used natural materials, was studied under variable moisture-saturation conditions, and electrical measurements were recorded to monitor and characterize the changes. The ultrasonic pulse velocity test, one of the most established and widely used non-destructive tests (NDT), was also used to correlate the moisture changes with the resistivity in the wood.

Item Characterizing Polymer-Treated Field Clays and Smart Cement-Clay Interaction (2017-12) Gattu, Vikhyath-Kumar; Vipulanandan, Cumaraswamy; Mo, Yi-Lung; Lim, Gino J.
To ensure the stability and durability of infrastructure, the problems caused by expansive clays need to be addressed with high priority. The distress caused by the swelling and shrinkage behavior of clays can result in massive rehabilitation of damaged roads and residential and commercial buildings. The cost of these projects scales up with the type of stabilizer, the lack of proper quality control (i.e., excess or inadequate usage of stabilizers), time, and labor. In the current market, very few methods focus on the treatment of moist soils while ensuring proper quality control coupled with an economical method of soil stabilization. In this study, soil borings under the highway near William P. Hobby Airport, Houston, TX, were extracted and characterized as expansive in nature. Eighteen field soils, with liquid limits varying from 50% to 90%, were studied and treated using 2.25%, 4.5%, 6%, and 9% polyacrylamide, of which four soils (WL = 54%, 62%, 72% and 88%) were studied in detail with 4.5% polyacrylamide. To verify the adequacy of the treatment, commercial clays, i.e., bentonite and kaolinite with liquid limits of 720% and 50%, respectively, were also treated and studied. The liquid limit showed an average decrease of 22%, 29%, 29.5%, and 30% for the 2.25%, 4.5%, 6%, and 9% dosages, respectively. The bentonite showed a decrease in liquid limit from 720% to 510% and 483% for dosages of 2.25% and 4.5% pure polymer, respectively.
The plastic limit showed an average increase of 5.1%, 9.1%, 9.2%, and 9.1% for the 2.25%, 4.5%, 6%, and 9% dosages, respectively. The plastic limit of the bentonite increased from 78% to 90% and 110% for 2.25% and 4.5% of polymer, respectively. The soils with liquid limits of 54%, 62%, 72%, and 88% had pH values of 6.77, 6.67, 6.55, and 6.5, respectively. After the 4.5% pure polymer dosage, the pH of the four soils reduced to 5.7. The OMC of the CH soils with liquid limits of 54%, 62%, 72%, and 88% increased from 15% to 22%, 17% to 22%, 20% to 22%, and 22% to 23%, respectively, after 4.5% pure polymer treatment. The maximum dry density reduced from 1.6 g/cm3 to 1.52 g/cm3, 1.59 g/cm3 to 1.5 g/cm3, 1.56 g/cm3 to 1.48 g/cm3, and 1.54 g/cm3 to 1.42 g/cm3, respectively, after 4.5% pure polymer treatment. After 4.5% pure polymer treatment, the maximum value of the modified expansion index (MEI) of the four CH soils (WL = 54%, 62%, 72% and 88%) reduced from 6 to 0.64, 6.7 to 1.6, 14.7 to 2, and 16.8 to 2.6, respectively. A majority of the soils were reclassified into the MI or OI region after the polymer treatment. This reclassification (from CH to MI or OI) showed a positive trend with the 2.25%, 4.5%, 6%, and 9% pure polymer treatments. The change in electrical resistivity after 4.5% polymer treatment decreased with the expansion index of the soil, as predicted using a power model with a correlation coefficient of 0.6. The CH soil exhibited a Case 2 behavior, representing purely resistive behavior at high frequencies (≥300 kHz). Additionally, this study presents methods that could reduce the shrinkage behavior of class H cement. Smart cement (conductive filler = 0.01%, W/C = 0.38) was substituted with 2%, 4%, and 6% bentonite clay by weight to counter the shrinkage, reducing the shrinkage by 44.6%, 74.5%, and 84.8%, respectively, in the first 24 hours of curing. The relationship between the curing and piezoresistive behavior of the smart cement-bentonite mix was found to be nonlinear.
The piezoresistivity of the control sample and of the 2%, 4%, and 6% bentonite samples was found to be 67.4%, 62.9%, 49.1%, and 46.4%, respectively. The average sensitivity of these samples was 6.8 %/MPa, 9.8 %/MPa, 12.8 %/MPa, and 12.6 %/MPa, respectively, showing an inverse relationship with the piezoresistive behavior.

Item Computational Methods for Multi-Scale Temporal Problems: Algorithms, Analysis, and Numerical Experiments (2016-12) Karimi, Saeid; Nakshatrala, Kalyana Babu; Wang, Keh-Han; Willam, Kaspar J.; Lim, Gino J.; Kulkarni, Yashashree
A major challenge in the numerical simulation of most natural phenomena is the presence of disparate temporal and spatial scales. Capturing all the fine features can be computationally prohibitive. Hence, the development of efficient and accurate multi-scale numerical algorithms has gained immense attention from engineers and scientists. Typically, a single numerical method cannot efficiently capture all of the aforementioned features. Due to the assumptions made in the construction of numerical methods and mathematical models, the range of applicability to various length and time scales is often limited. One direction for resolving this issue is to apply different numerical methods in different regions of the computational domain. This strategy enables computation of the necessary details as desired by the user. In this work, we propose numerical methodologies based on domain partitioning techniques that allow different time-steps and time-integrators in different regions of the computational domain. The first problem of interest is elastodynamics, which can pose various temporal scales in impact, contact, and wave propagation problems. A monolithic (strong) coupling algorithm based on non-overlapping domain partitioning is proposed. The proposed algorithm is based on the theory of differential/algebraic equations, and its numerical stability, energy conservation, and accuracy are studied in detail.
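The dissertation's monolithic DAE-based coupling is not reproduced here, but the underlying idea of using different time-steps in different regions can be illustrated with a much simpler staggered subcycling sketch: a fast scalar ODE is integrated with fine steps inside each coarse step of a slow one, with the coupling value held frozen over the coarse step (all equations and step sizes below are made up for illustration):

```python
def integrate_partitioned(u0, v0, T, dt_coarse, subcycles):
    """Forward-Euler integration of a slow variable u and a fast variable v:
        du/dt = -u + v          (slow subsystem, step dt_coarse)
        dv/dt = -50 * (v - u)   (fast subsystem, step dt_coarse / subcycles)
    u is held frozen while v subcycles across each coarse step (staggered,
    weak coupling -- not a monolithic scheme)."""
    u, v = u0, v0
    t = 0.0
    dt_fine = dt_coarse / subcycles
    while t < T - 1e-12:
        v_start = v
        for _ in range(subcycles):        # fine steps for the fast subsystem
            v += dt_fine * (-50.0 * (v - u))
        u += dt_coarse * (-u + v_start)   # one coarse step for the slow subsystem
        t += dt_coarse
    return u, v

u, v = integrate_partitioned(u0=1.0, v0=0.0, T=1.0, dt_coarse=0.05, subcycles=10)
print(round(u, 3), round(v, 3))  # the two variables relax to a common value
```

Subcycling lets the fast subsystem satisfy its own stability limit (here 50·Δt_fine < 2) without forcing the slow subsystem onto the same small step, which is the basic payoff of multi-time-step domain partitioning.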
Following these findings, we extend this algorithm to advection-diffusion-reaction problems. The proposed algorithm proves especially useful in cases where the relative strength of the involved processes changes dramatically with respect to the spatial coordinates. The numerical stability and accuracy of this method are studied, and its application to fast bimolecular chemical reactions is showcased. Further on, we confine our attention to single- and multiple-relaxation-time lattice Boltzmann methods for the advection-diffusion equation and study their performance in preserving the maximum principle and the non-negative constraint. Finally, a computational framework based on overlapping domain decomposition methods is proposed. This framework is designed for advection-diffusion problems and allows coupling of the finite element method and lattice Boltzmann methods with different time-steps and grid sizes. Additionally, a new method for enforcing the Dirichlet and Neumann boundary conditions on the numerical solution from the lattice Boltzmann method is proposed. This method is based on maximization of entropy and ensures non-negativity of the discrete distributions on the boundary of the domain. We study the performance of this framework through numerical experiments and showcase its application to fast and equilibrium chemical reactions.

Item Data-driven Inverse Linear Programming: Integrating Inverse Optimization and Machine Learning (2021-08) Shahmoradi, Zahed; Lee, Taewoo; Lim, Gino J.; Lin, Ying; Mang, Andreas; Huchette, Joey
Inverse linear programming (LP) has received increasing attention due to its potential to infer efficient optimization formulations that can closely replicate the behavior of a complex system.
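In its simplest form, inverse LP asks: given an observed decision x̂ and a known feasible region, which cost vectors make x̂ optimal? A toy brute-force sketch over the vertices of a small polytope conveys the idea (this is purely illustrative and is not the thesis's MQIO formulation):

```python
from itertools import product

# Feasible region: vertices of the unit square (a toy LP feasible set).
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]
observed = (1, 1)  # the decision we observed and want to "explain"

def is_optimal(cost, point, verts):
    """True if `point` maximizes cost . x over the listed vertices."""
    val = cost[0] * point[0] + cost[1] * point[1]
    return all(cost[0] * vx + cost[1] * vy <= val + 1e-9 for vx, vy in verts)

# Search a coarse grid of candidate cost vectors for those that rationalize
# the observation (a stand-in for solving the inverse problem exactly).
candidates = [(a, b) for a, b in product(range(-2, 3), repeat=2) if (a, b) != (0, 0)]
consistent = [c for c in candidates if is_optimal(c, observed, vertices)]
print(consistent)  # every nonzero cost vector with nonnegative components
```

Even this toy example shows the core difficulty the thesis addresses: the set of consistent objectives is typically not a single vector but a cone, so additional criteria (such as stability under noisy observations) are needed to pick a useful one.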
In this thesis, we integrate data-driven inverse optimization techniques with ideas from statistical machine learning to improve the stability and applicability of inverse optimization in settings that involve imperfect data and observations from decision makers with different preferences. In Chapter 1 of this thesis, we provide background information about inverse optimization and discuss its conceptual similarities with problems in machine learning. In Chapter 2, we review the relevant literature on inverse optimization and then present the contribution of this thesis to the literature. In Chapter 3, we first discuss the sensitivity of the inversely inferred parameters and corresponding forward solutions from existing inverse LP methods to noise, errors, and uncertainty in the input data, and present illustrative examples highlighting their limited applicability in data-driven settings. We then introduce the notion of inverse and forward stability in inverse LP and borrow ideas from quantile regression to propose a novel inverse LP method that determines a set of objective functions that are stable under data imperfection and generate forward solutions close to the relevant subset of the data. We formulate the inverse model as a tractable large-scale mixed-integer program (MIP), called mixed-integer quantile inverse optimization (MQIO), and apply it to diet survey data to recommend diets that are consistent with the individual's food preferences. In Chapter 4, we analyze the complexity of the large-scale MQIO formulation from Chapter 3 and elucidate its connection to biclique problems, which we exploit to develop an exact algorithm and heuristics that instead solve much smaller MIPs to construct a solution to the original problem. We then numerically evaluate the performance of the proposed algorithms on randomly generated instances.
Lastly, we show that a modified version of the proposed heuristics can accommodate online settings, and we demonstrate them in the diet recommendation and transshipment applications. Finally, in Chapter 5 we integrate inverse optimization and clustering and propose a new clustering approach, called optimality-based clustering, which clusters the data points based on their encoded decision preferences. We assume that each data point is a decision made by a rational decision maker (i.e., by approximately solving an optimization problem) and cluster the data points by identifying, for each cluster, a common objective function of the optimization problems such that the worst-case optimality gap for the data points within each cluster is minimized. We propose three clustering models and present tractable MIP formulations that lead to lower- and upper-bound solutions. We demonstrate these clustering models using randomly generated instances of various sizes.

Item Deployment Optimization of Drone Base Stations in Cellular Networks (2022-08-15) Soltani, Sepehr; Lim, Gino J.; Lin, Ying; Vipulanandan, Cumaraswamy
UAV-aided cellular coverage is anticipated to be an important part of future cellular networks due to the high demand for data and society's growing dependence on fast and reliable wireless internet coverage. However, several concerns regarding operating a swarm of drones still need to be addressed. The number of drones and their locations play a vital role in determining the cost and risks of an operation. In this thesis, a framework called the Drone Location Problem (DLP) is proposed to optimize the deployment of multiple drone base stations (DBSs) for covering users. The framework includes two main stages, in which the number of drones and their locations are optimized, respectively: the first stage, DBSP-I, is a circle placement problem, and the second stage, DBSP-II, is a smallest enclosing circle problem.
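The second stage, the smallest enclosing circle, is a classical computational-geometry problem: the optimal circle is always determined by two or three points on its boundary. For the handful of users assigned to one drone it can even be solved by brute force over those candidate circles (a naive O(n⁴) sketch with made-up user positions, not the thesis's formulation):

```python
import math
from itertools import combinations

def covers(cx, cy, r, pts, eps=1e-9):
    """True if the circle (cx, cy, r) contains every point (with tolerance)."""
    return all(math.hypot(x - cx, y - cy) <= r + eps for x, y in pts)

def circle_from_3(p, q, s):
    """Circumcircle of three points, or None if they are (nearly) collinear."""
    ax, ay = p; bx, by = q; sx, sy = s
    d = 2 * (ax * (by - sy) + bx * (sy - ay) + sx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - sy) + (bx**2 + by**2) * (sy - ay)
          + (sx**2 + sy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (sx - bx) + (bx**2 + by**2) * (ax - sx)
          + (sx**2 + sy**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def smallest_enclosing_circle(pts):
    """Brute force: try circles through every pair (as a diameter) and triple."""
    best = None
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        c = ((x1 + x2) / 2, (y1 + y2) / 2, math.hypot(x1 - x2, y1 - y2) / 2)
        if covers(*c, pts) and (best is None or c[2] < best[2]):
            best = c
    for a, b, s in combinations(pts, 3):
        c = circle_from_3(a, b, s)
        if c and covers(*c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best

users = [(0, 0), (4, 0), (2, 3), (1, 1)]  # illustrative user positions
cx, cy, r = smallest_enclosing_circle(users)
print(round(cx, 2), round(cy, 2), round(r, 2))  # 2.0 0.83 2.17
```

For larger point sets, Welzl's randomized algorithm solves the same problem in expected linear time, which is presumably why the problem is tractable even within a larger deployment framework.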
Since DBSP-I is a complex problem that may require a long time to solve optimally when the number of users is large, two heuristics are proposed to produce similar results in a shorter time. These two heuristics are called KMDLP-I and KMDLP-II. KMDLP-I uses the k-means clustering algorithm to deploy drones for an area coverage operation. However, since clustering algorithms suffer from local-minima problems that depend on how the clusters are initialized, two initialization methods were tested: k-means++ and random. Moreover, KMDLP-II was developed, which uses a bottom-up approach to improve the result obtained by KMDLP-I. Two classes of instances were created to investigate the effect of the distribution of users in the region on the execution time and the number of drones deployed by the heuristics. In the first class, users were distributed to represent scenarios such as sports events and gatherings, where they form clusters; in the second class, users were distributed uniformly. Comprehensive experiments on instances with different numbers of users demonstrate that although DLP can find the optimal number of drones and their locations for instances with up to 25 and 150 users for the first and second classes of instances, respectively, it requires several hours to find an optimal solution as the number of users increases and may not converge to optimality within 2 hours. The heuristics are shown to be more effective in terms of execution time and are able to solve instances with more than 120 users in a few seconds; however, their results are not always the same as those of DLP. KMDLP-I is shown to perform very well on the first class of instances and produced results similar to those of DLP. However, as the number of uniformly distributed users increases, the gap between the number of drones deployed by DLP and by KMDLP-I increases.
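KMDLP-I's use of k-means is straightforward to sketch: cluster the user positions into k groups and place one drone at each centroid. The sketch below uses plain Lloyd's iteration with fixed initial centers (not the thesis's implementation, which also tests k-means++ initialization), and the user positions are illustrative:

```python
import math

def kmeans(points, centers, iters=50):
    """Lloyd's algorithm: alternate assigning points to the nearest center
    and moving each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated user "gatherings" (illustrative positions, k = 2 drones).
users = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, clusters = kmeans(users, centers=[(0, 0), (10, 10)])
# Minimum coverage radius each drone would need from its centroid.
drone_radii = [max(math.dist(p, c) for p in cl)
               for c, cl in zip(centers, clusters)]
print(centers)
print([round(r, 3) for r in drone_radii])
```

The local-minima sensitivity mentioned in the abstract comes from this same loop: a poor choice of initial `centers` can converge to a worse partition, which is exactly what k-means++ initialization is designed to mitigate.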
Experiments demonstrate that this problem can be mitigated and the gap between KMDLP-I and DLP can be reduced when k-means++ is used, as it outperforms the random initialization method by deploying a smaller number of drones on average. My experiments show that KMDLP-II improves the results even further, by 10% compared to KMDLP-I. They also suggest that KMDLP-I or KMDLP-II can be used, at the cost of utilizing more drones, when it is critical to find a solution very quickly, such as during a search and rescue mission or post-disaster network restoration. On the other hand, if enough time is available in advance to plan for an operation, the DLP framework should be used, since it can find the optimal number of drones and their locations for providing cellular coverage regardless of how many users are in the region and how they are distributed.