Browsing by Author "Bodmann, Bernhard G."
Now showing 1 - 20 of 27
Item Binary Frames, Codes and Euclidean Embeddings (2018-12)
Mendez, Robert Paul 1974-; Bodmann, Bernhard G.; Kalantar, Mehrdad; Labate, Demetrio; Ward, Rachel A.
This dissertation has two parts. The first part is concerned with using Euclidean embeddings and random hyperplane tessellations to construct binary block codes. The construction proceeds in two stages. First, an auxiliary ternary code is chosen which consists of vectors in the union of coordinate subspaces. The subspaces are selected so that any two vectors of different support have a sufficiently large distance. In addition, any two ternary vectors from the auxiliary codebook with common support are at a guaranteed minimum distance. In the second stage, the auxiliary ternary code is converted to a binary code by an additional random hyperplane tessellation. The second part of this dissertation is dedicated to binary Parseval frames, which share many structural properties with real and complex ones. On the other hand, there are subtle differences, for example that the Gramian of a binary Parseval frame is characterized as a symmetric idempotent whose range contains at least one odd vector. Here, we study binary Parseval frames obtained from the orbit of a vector under a group representation, in short, binary Parseval group frames. In this case, the Gramian of the frame is in the algebra generated by the right regular representation. We identify equivalence classes of such Parseval frames with binary functions on the group that satisfy a convolution identity. This allows us to find structural constraints for such frames. We use these constraints to catalogue equivalence classes of binary Parseval frames obtained from group representations. As an application, we study the performance of binary Parseval frames generated with abelian groups for purposes of error correction.
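The hyperplane-tessellation step can be illustrated with a minimal sketch (the dimensions, counts, and Gaussian choice of normals below are illustrative assumptions, not the dissertation's construction): each random hyperplane contributes one bit, recording which side of the hyperplane a vector lies on, so that Hamming distance in the resulting binary code reflects Euclidean geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 32                      # ambient dimension, number of hyperplanes
A = rng.standard_normal((m, d))   # random hyperplane normals

def tessellate(x):
    """One bit per hyperplane: which side of the hyperplane x lies on."""
    return (A @ x >= 0).astype(int)

x = rng.standard_normal(d)
y = x + 0.01 * rng.standard_normal(d)   # a small perturbation of x
z = rng.standard_normal(d)              # an unrelated vector

# Nearby vectors agree on most bits; unrelated vectors typically disagree
# on many, so Hamming distance in the code reflects Euclidean distance.
near = np.sum(tessellate(x) != tessellate(y))
far = np.sum(tessellate(x) != tessellate(z))
```

With enough hyperplanes the fraction of disagreeing bits concentrates around a monotone function of the angle between the two vectors.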
We show that if p is an odd prime, then the group $\mathbb{Z}_{q^p}$ is always preferable to $\mathbb{Z}_q^p$ when searching for best performing codes associated with binary Parseval group frames.

Item Compactly Supported Frame Wavelets and Applications (2019-08)
Karantzas, Nikolaos 1986-; Papadakis, Emanuel I.; Labate, Demetrio; Bodmann, Bernhard G.; Prasad, Saurabh
Signal processing has been at the forefront of modern information technology, as the need for storing, analyzing, and interpreting the data gathered all around us is ever growing. Multi-dimensional sparse signal representations occupy a significant part of the literature on multi-scale decompositions. The interest in such representations arises from their ability to analyze, synthesize, and modify signals carrying information about the behavior of specific phenomena. This work is devoted to the development and design of application-targeted tools for the multi-variable analysis of image data. Our main interests revolve around both the theoretical and practical aspects of signal processing, machine learning, and deep neural networks. In Chapter $1$ we present the necessary mathematical background this work is based on. In Chapter $2$ we develop a theoretical base for the construction of a specific class of compactly supported Parseval framelets with directional characteristics. The framelets we construct arise from readily available refinable functions; their filters have few non-zero coefficients and custom-selected orientations, and can act as finite-difference operators. We present explicit examples related to well-known directional representations (directional filter banks). Finally, in Chapter $3$ we explore the capabilities of our construction in the growing field of deep convolutional neural networks.

Item Correlation Minimizing Frames (2016-05)
Leonhard, Nicole 1981-; Paulsen, Vern I.; Bodmann, Bernhard G.; Labate, Demetrio; Casazza, Peter G.
In this dissertation, we study the structure of correlation minimizing frames.
A correlation minimizing (N,d)-frame is any uniform Parseval frame of N vectors in dimension d such that the largest absolute value of the inner products of any pair of vectors is as small as possible. We call this value the correlation constant. These frames are important as they are optimal for the 2-erasures problem. We produce the actual correlation minimizing frames. To further study the structure of correlation minimizing frames, we obtain upper bounds on the correlation constant. In the real case, we find an upper bound on the correlation constant of a correlation minimizing (N,d)-frame. As a result, we prove that the correlation constant goes to zero for fixed redundancy as the dimension and the number of vectors increase proportionally by a factor of 2^k. When addressing the correlation constant for complex correlation minimizing (N,d)-frames, we consider circulant matrices which are also projections as the Gramian matrix of a uniform Parseval frame. We derive a relationship between these Gramian matrices and the Dirichlet kernel, as well as the structure of quadratic residues. Utilizing these relationships, we obtain two upper bounds on the correlation constant. Furthermore, we investigate how the correlation constant behaves asymptotically in comparison to the Welch bound. In $L^2[0,1]$, the Laurent matrix is a projection defined by the Fourier transform of the characteristic function on an interval of fixed finite length in [0,1].
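The interplay between quadratic residues, circulant Gramians, and the Welch bound can be illustrated by a small hedged sketch (the (7,3) harmonic frame below is a standard textbook example, not a construction taken from this dissertation): keeping the DFT rows indexed by the quadratic residues mod 7 yields a unit-norm tight frame whose correlation constant meets the Welch bound exactly.

```python
import numpy as np

def correlation_constant(F):
    """Largest |<f_i, f_j>| over distinct pairs of unit-norm columns of F."""
    G = np.abs(F.conj().T @ F)    # magnitudes of the Gramian
    np.fill_diagonal(G, 0)
    return G.max()

def welch_bound(N, d):
    """Lower bound on the correlation constant of N unit vectors in C^d."""
    return np.sqrt((N - d) / (d * (N - 1)))

# Harmonic frame: keep the DFT rows indexed by the quadratic residues
# mod 7, i.e. {1, 2, 4}, and scale the columns to unit norm.
N, d = 7, 3
rows = np.array([1, 2, 4])
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(d)

# The correlation constant equals the Welch bound: an equiangular tight frame.
c, w = correlation_constant(F), welch_bound(N, d)
```

Choosing non-residue rows such as {0, 1, 2} instead gives a tight frame whose correlation constant is strictly larger than the Welch bound.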
Considering the magnitude of the Fourier transform of the characteristic function on a set of sufficiently small size, we derive a bound on the correlation constant and construct a method to make the correlation constant arbitrarily small.

Item DIRECTIONAL MULTISCALE ANALYSIS USING SHEARLET THEORY AND APPLICATIONS (2012-08)
Negi, Pooran 1978-; Labate, Demetrio; Papadakis, Emanuel I.; Bodmann, Bernhard G.; Azencott, Robert; Prasad, Saurabh
Shearlets emerged in recent years in applied harmonic analysis as a general framework to provide sparse representations of multidimensional data. This construction was motivated by the need to provide more efficient algorithms for data analysis and processing, overcoming the limitations of traditional multiscale methods. In particular, shearlets have proved to be very effective in handling directional features, compared with approaches based on separable extension that are used in multi-dimensional Fourier and wavelet analysis. In order to deal efficiently with edges and other directionally sensitive (anisotropic) information, the analyzing shearlet elements are defined not only at various locations and scales but also at various orientations. Many important results about the theory and applications of shearlets have been derived during the past 5 years. Yet, there is a need to extend this approach and its applications to higher dimensions, especially 3D, where important problems such as video processing and analysis of biological data in native resolution require the use of 3D representations. The focus of this thesis is the study of shearlet representations in 3D, including their numerical implementation and application to problems of data denoising and enhancement. Compared with competing methods such as 3D curvelets and surfacelets, our numerical experiments show better peak signal-to-noise ratio (PSNR) and visual quality.
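PSNR, the quality metric cited above, is simple to compute; a minimal sketch (the peak value 255 assumes 8-bit images, and the arrays here are illustrative, not data from the thesis):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer match."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64))
noisy = clean + rng.normal(0, 5, size=clean.shape)      # mild noise
noisier = clean + rng.normal(0, 25, size=clean.shape)   # strong noise

# Stronger noise yields a lower PSNR.
good, bad = psnr(clean, noisy), psnr(clean, noisier)
```

A denoising method is judged better when the PSNR of its output against the clean reference is higher.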
In addition, to further explore the ability of shearlets to provide an ideal framework for sparse data representations, we have introduced and analyzed a new class of smoothness spaces associated with the shearlet decomposition, and their relationship with Besov and curvelet spaces. Smoothness spaces associated with a multi-scale representation system are important for the analysis and design of better image processing algorithms.

Item Fracture Characterization at the Dickman Field, KS: Integrating Well Log and Prestack Seismic Analyses (2012-08)
Brown, Timothy 1985-; Liner, Christopher L.; Van Wijk, Jolante W.; Bodmann, Bernhard G.
Dickman Field, located in Ness County, Kansas, has produced 1.7 million barrels of oil since 1962 and is presently being evaluated by the University of Houston as a potential CO2 sequestration locality. The primary injection target is a porous, brine-saturated Mississippian carbonate unit at approximately -2000 ft (-610 m) subsea. The objective of this study is to characterize sub-vertical fracture networks that potentially favor mobility of free-state CO2 within the reservoir. The 6 Hz results from a narrow-band decomposition of the Dickman 3D broadband volume show NW- and NE-striking lineaments in the reservoir interval. These spectral anomalies were originally assumed to be evidence of sub-resolution fracturing. The validity of these features was tested by analyzing two kinds of data: 1) digital well logs from nearby wells and 2) available prestack seismic data from the Dickman 3D survey. A fuzzy inference system was used to obtain ground-truth fracture information from conventional well logs. Results show probable indicators of crosscutting fractures in the Mississippian section. Prestack analysis was used to detect azimuthal variations in the reflectivity gradient using amplitudes picked from the Gilmore City horizon.
Azimuthal anisotropy orientations agree with the 6 Hz features as well as with lineament orientations found in previous seismic attribute studies. The 6 Hz anomalies, although supported by geological and geophysical evidence in terms of orientation, are most likely products of low-frequency noise found in the upper 0.2 seconds of the Dickman 3D.

Item FRAMES AS CODES FOR STRUCTURED ERASURES (2012-12)
Singh, Pankaj K. 1979-; Bodmann, Bernhard G.; Casazza, Peter G.; Labate, Demetrio; Paulsen, Vern I.
This dissertation studies the role of frames as codes. Frames are families of vectors that give rise to embeddings of Hilbert spaces. These embeddings can be interpreted as codes, because possible linear dependencies among frame vectors can be used to recover missing components in the embedded data, so-called erasures. This dissertation is dedicated to structured erasures. One type of structured erasure occurs when consecutive frame coefficients are lost due to the occurrence of random burst errors. Assuming that the distribution of bursts is invariant under cyclic shifts and that the burst-length statistics are known, we wish to find frames of a given size which minimize the mean-square reconstruction error for the encoding of vectors in a complex finite-dimensional Hilbert space. We derive statistical error bounds for a given Parseval frame and relate them to its generalized frame potential. In the case of cyclic Parseval frames, we find a family of frames which minimizes the upper bound. Under certain conditions, these minimizers are identical to complex Bose-Chaudhuri-Hocquenghem (BCH) codes discussed in the literature. The accuracy of our upper bounds for the mean-square error is substantiated by complementary lower bounds. Another part of the dissertation concerns the transmission of digital media, typically following a protocol that splits data into a number of packets having a fixed size.
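The basic mechanism, redundancy in a frame compensating for lost coefficients, can be shown in a small hedged sketch (the Mercedes-Benz frame for R^2 is a standard example, not the burst-optimized construction of this dissertation): a Parseval frame of three vectors encodes a two-dimensional vector, and any single erased coefficient can be recovered exactly from the survivors.

```python
import numpy as np

# A Parseval frame of 3 vectors for R^2 (Mercedes-Benz frame): F.T @ F = I.
F = np.sqrt(2 / 3) * np.array([[0.0, 1.0],
                               [-np.sqrt(3) / 2, -0.5],
                               [np.sqrt(3) / 2, -0.5]])

x = np.array([1.0, 2.0])
c = F @ x                # analysis: three frame coefficients encode x

# Erase coefficient 0; the two surviving frame vectors still span R^2,
# so x is recovered exactly by least squares on the surviving rows.
survivors = [1, 2]
x_hat, *_ = np.linalg.lstsq(F[survivors], c[survivors], rcond=None)
```

With no erasures, Parseval reconstruction is simply `F.T @ c`; the redundancy only matters once coefficients go missing.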
When such packets are sent over a network such as the Internet, there is in principle no guarantee of reliability; that is, the contents of each packet may become corrupted in the course of transmission, or entire packets may be lost due to buffer overflows. We assume that during the transmission, only a few of these packets are corrupted or lost. In this part of the dissertation we adapt ideas by Candès and Tao in order to construct frames as codes for such erasures. The frames are associated with consistency checks for the data that are obtained from random matrices whose entries are independent realizations of a Gaussian random variable. In addition to the random Gaussian matrices, we use random projections to achieve recovery based on a low-dimensional check-sum measurement. We use a generalized technique of $\ell_1$ minimization to reconstruct the error vector from these measurements.

Item From Generalized Fourier Transforms to Coupled Supersymmetry (2017-05)
Williams, Cameron Louis 1990-; Bodmann, Bernhard G.; Kouri, Donald J.; Kalantar, Mehrdad; Labate, Demetrio; Klauder, John R.
The Fourier transform, the quantum mechanical harmonic oscillator, and supersymmetric quantum mechanics are well-studied objects in mathematics. The relations between them are also well understood, though the traditional understanding of each has led to a rigidity in their design. In this thesis, we develop generalizations of the Fourier transform and explore their relations. Exploring the analytic structure of these integral transforms, particularly their related Hamiltonians, naturally leads to Hamiltonian-like operators which have rich analytic and algebraic structures. The algebraic structure underlying the Hamiltonian-like operators associated to generalized Fourier transforms suggests a much more general abstract formulation.
To this end, we introduce the coupled supersymmetry (coupled SUSY) algebraic framework, which unifies the quantum mechanical harmonic oscillator and supersymmetric quantum mechanics in a more complete way. The coupled SUSY framework subsumes the quantum mechanical harmonic oscillator and provides a broader class of systems with rich functional analytic and algebraic structures. In this setting, one is able to develop further generalizations of the Fourier transform. A further generalization of coupled SUSY is briefly presented, which appears to offer a very new insight into supersymmetry.

Item Gaussian Polynomial Filters and Generalized Shift-Invariant Frames (2015-12)
Maxwell, Nicholas 1985-; Bodmann, Bernhard G.; Kouri, Donald J.; Papadakis, Emanuel I.; Labate, Demetrio; Nammour, Rami
We present and study a family of filters on $L^2(\mathbb{R}^d)$ consisting of Gaussian polynomials, that is, multipliers in the frequency domain that are products of polynomials and Gaussians. These filters are constructed to approximate the characteristic functions of fairly general sets in $\mathbb{R}^d$, in an almost-uniform sense. We also study generalized shift-invariant (GSI) frames for $L^2(\mathbb{R}^d)$. These are frames consisting of regular lattice translations of countably many functions, which we call generators. GSI frames are fundamental to sampling theory and many areas of applied mathematics and engineering, in particular signal and image analysis. Their distinguishing feature is an accommodation for generators which may be unrelated to one another, and for general lattices of translations, which may vary with the generators. GSI frames generalize systems such as wavelets, Gabor systems, shearlets, curvelets, and filter banks. We discuss very general conditions on the generators under which one can determine lattice spacings, or sampling rates, so as to meet the frame condition.
We develop a fast and numerically stable method for inverting the frame operator, and we give a detailed analysis of this method, as well as of the fast numerical implementation of the synthesis and analysis operators associated with GSI frames. We give a careful analysis of two methods for obtaining approximate dual GSI frames for general GSI frames. We apply this GSI system framework to the Gaussian polynomial filters developed in this dissertation to obtain frames of translated Gaussian polynomials.

Item Geometric Conditions for the Recovery of Sparse Signals on Graphs from Measurements Generated with Heat Kernels (2023-08)
May, Jennifer J.; Bodmann, Bernhard G.; Labate, Demetrio; Mang, Andreas; Bittner, Eric R.
This dissertation establishes results on signal recovery for graphs, when the signals are functions with small support and what is observed is a noisy version of the signal smoothed by evolving it under the heat equation governed by the graph Laplacian. The results discussed here are in close analogy to the mathematical theory of super-resolution developed by Candès and Fernández-Granda for finitely supported measures on Euclidean spaces. As in the Euclidean case, recovery guarantees depend on the size of the support, a distance separation for elements in the support, and a time limit for the heat kernels appearing in the measured signal. This dissertation includes a comparison of linear recovery strategies, results of an exhaustive search, and a concrete recovery algorithm. The main emphasis is on the accuracy of estimates for noisy signal recovery based on minimizing the 1-norm, which is a strategy central to compressed sensing. In contrast to the Euclidean case, the graph setting does not have a straightforward implementation of a Fourier transform with its convenient properties. Instead, the results discussed here depend on bounds for the heat kernel and diagonally dominant matrices.
The combination of 1-norm minimization with conditions for the existence of a dual certificate offers the widest range of validity in the interplay between sparsity, separation, and time limit for the heat kernels that permit noisy recovery guarantees.

Item Graph Parameters via Operator Systems (2015-12)
Ortiz Marrero, Carlos M. 1989-; Paulsen, Vern I.; Tomforde, Mark; Bodmann, Bernhard G.; Dykema, Ken J.
This work is an attempt to bridge the gap between the theory of operator systems and various aspects of graph theory. We start by showing that two graphs are isomorphic if and only if their corresponding operator systems are isomorphic with respect to their order structure. This means that the study of graphs is equivalent to the study of these special operator systems up to the natural notion of isomorphism in their category. We then define a new family of graph theory parameters using this identification. It turns out that these parameters have a lot in common with the Lov\'{a}sz theta function; in particular, we can write down explicitly how to compute them via a semidefinite program. Moreover, we explore a particular parameter in this family and establish a sandwich theorem that holds for some graphs. Next, we move on to explore the concept of a graph homomorphism through the lens of C$^*$-algebras and operator systems. We start by studying the various notions of a quantum graph homomorphism and examine how they are related to each other. We then define and study a C$^*$-algebra that encodes all the information about these homomorphisms and establish a connection between computational complexity and the representation of these algebras. We use this C$^*$-algebra to define a new quantum chromatic number and establish some basic properties of this number. We then suggest a way of studying these quantum graph homomorphisms using certain completely positive maps and describe their structure.
Finally, we use these completely positive maps to define the notion of a ``quantum'' core of a graph.

Item Hermite-Gauss Quadrature with Generalized Hermite Weight Functions and Small Sample Sets for Sparse Polynomials (2020-04)
Vu, Brian-Tinh D.
This thesis derives a Gaussian quadrature rule from a complete set of orthogonal lacunary polynomials. The resulting quadrature formula is exact for polynomials whose even part skips powers, with a set of sample values that is much smaller than the degree. The weight for these quadratures is a generalized Gaussian, whose negative logarithm is an even monomial; the powers of this monomial make up the even part of the polynomial to be integrated. We first present Rodrigues formulas for generalized Hermite polynomials (GHPs) that are complete and orthogonal with respect to the generalized Gaussian. From the Rodrigues formula for even GHPs we establish a three-term recursion relation and find the normalization constants. We present a slight modification of the Christoffel-Darboux identity and the Lagrange interpolation polynomials, and proceed to derive the roots, weights, and an estimate of the error for the generalized Hermite-Gauss quadrature rule applied to sufficiently smooth functions. We illustrate the quadrature rule by applying it to two examples.
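For orientation, the classical special case (weight $e^{-x^2}$, ordinary Hermite polynomials) already shows the exactness phenomenon; a hedged sketch using NumPy's standard Gauss-Hermite rule, not the generalized rule of this thesis:

```python
import numpy as np

# Classical Gauss-Hermite rule: n nodes integrate p(x) e^{-x^2} over the
# real line exactly for every polynomial p of degree <= 2n - 1.
nodes, weights = np.polynomial.hermite.hermgauss(3)

# Integral of x^2 e^{-x^2} is sqrt(pi)/2; of x^4 e^{-x^2} is 3 sqrt(pi)/4.
approx2 = np.sum(weights * nodes ** 2)
approx4 = np.sum(weights * nodes ** 4)
exact2 = np.sqrt(np.pi) / 2
exact4 = 3 * np.sqrt(np.pi) / 4
```

Here three sample values integrate every polynomial up to degree five exactly; the thesis pushes this economy further for lacunary polynomials under generalized Gaussian weights.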
Finally, we apply a major result from compressive sensing, relating a matrix's coherence and sparse recovery guarantees, to the quadrature setting.

Item Image Analysis Using Directional Multiscale Representations and Applications for Characterization of Neuronal Morphology (2015-12)
Ozcan, Burcin 1987-; Papadakis, Emanuel I.; Labate, Demetrio; Bodmann, Bernhard G.; Laezza, Fernanda
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, quantification and analysis of morphological features are for the most part processed manually, slowing data processing significantly and limiting the information gained to a descriptive level. As an example, automated identification of the primary components of a neuron and extraction of its features are essential steps in many quantitative studies of neuronal networks. Recent advances in applied harmonic analysis, especially in the area of multiscale representations, offer a variety of techniques and ideas which have the potential to impact this field of scientific investigation. Motivated by the properties of directional multiscale representations, the focus of this thesis is to introduce a new notion, the directional ratio, which is a multiscale quantitative measure capable of distinguishing isotropic from anisotropic structures and of characterizing local isotropy. Another part of the dissertation illustrates the application of the directional ratio. In particular, we present an algorithm for automated soma extraction and separation of contiguous somas.
Our numerical experiments show that this approach is reliable and efficient for detecting and segmenting somas.

Item Integral Transforms, Anomalous Diffusion, and the Central Limit Theorem (2017-10-12)
Pandya, Nikhil N.; Williams, Cameron L.; Bodmann, Bernhard G.; Yao, Jie
We present new connections among anomalous diffusion (AD), normal diffusion (ND), and the Central Limit Theorem (CLT). This is done by defining new canonical Cartesian-like position and Cartesian-like momentum variables and canonically quantizing these according to Dirac to define generalized negative semi-definite and self-adjoint Laplacian operators. These lead to new generalized Fourier transformations (GFT) and associated generalized probability distributions, which are form invariant under the corresponding transform. The new Laplacians also lead us to postulate generalized diffusion equations (GDE), which imply a connection to the CLT. We show that the derived diffusion equations have the O’Shaughnessy-Procaccia equations (OPE) as a special case. We also show that AD in the original, physical position is actually ND when viewed in terms of displacements in an appropriately transformed position variable. These tools allow us to prove the CLT for this class of diffusion equations.

Item Inverse acoustic scattering series using the Volterra renormalization of the Lippmann-Schwinger equation in one dimension (2013-08)
Yao, Jie 1989-; Kouri, Donald J.; Hussain, Fazle; Bodmann, Bernhard G.
The inverse scattering problem has enormous importance for both practical and theoretical applications, such as seismic exploration, nondestructive testing, and medical imaging. Based on the early work of Jost and Kohn \cite{jost52}, Moses \cite{Moses56}, Razavy \cite{Razavy75} and Prosser \cite{prosser1969}, Weglein and co-workers have pioneered inverse scattering series methods that require no assumed propagation velocity model.
Kouri and Vijay formulated the 1-D acoustic scattering series in terms of a Volterra kernel with reflection and transmission data \cite{Kouri03}. It can further be proved that the Born-Neumann series solution of the Volterra equation converges absolutely, irrespective of the strength of the velocity interaction. Following this previous work of Kouri, higher orders of the Volterra Inverse Scattering Series (VISS) with reflection and transmission data ($R_k/T_k$) are analyzed here. In addition, for seismic exploration applications, we also extend the VISS approach to the case where only reflection data are available. The cases of single square barriers or wells and of Gaussian barriers and wells are studied to illustrate how well the Volterra inverse scattering series performs the inversion. The results demonstrate that the Volterra inverse scattering series method is an effective tool in inverse scattering.

Item Large Deviations Approach for Stochastic Genetic Evolution (2013-08)
Aggarwal, Aanchal; Azencott, Robert; Timofeyev, Ilya; Bodmann, Bernhard G.; Azevedo, Ricardo B. R.
Theoretical ecologists have long strived to explain how the persistence of populations depends on biotic and abiotic factors, and have proposed various models to predict the long-time behavior of biological populations. We are interested in modeling the effects of natural selection and adaptation in a bacterial population of $Escherichia\; coli$, one of the most intensively studied organisms on Earth. A distinctive signature of living systems is Darwinian evolution, that is, a tendency to generate as well as self-select individual diversity. Mathematical models built to describe this natural dynamics of populations must be rooted in the microscopic, stochastic description of discrete individuals characterized by one or several adaptive traits and interacting with each other.
The simplest models assume asexual reproduction and haploid genetics, where an offspring usually inherits the trait values of its progenitor, except when a mutation causes the offspring to take a mutation step to new and different trait values; selection then follows from ecological interactions among individuals. In this dissertation we borrow results from large deviation theory to predict the most likely evolutionary trajectories for genetic traits in a given bacterial population, leading from known initial multi-species frequencies to terminal domination by mutants with highest fitness. To compute the most likely evolution path, we seek the trajectory with minimal large deviations cost among all genetic evolution trajectories. The goal thus reached is to compute the most likely evolutionary steps which brought about an actually observed terminal overwhelming dominance by a new mutant.

Item Phase Retrieval for Finitely-Supported Complex Measures via the Fourier Transform (2022-05-06)
Abouserie, Ahmed; Bodmann, Bernhard G.; Mixon, Dustin G.; Labate, Demetrio; Kalantar, Mehrdad
We study the recovery of a finitely-supported complex measure $\mu=\sum_{j=1}^{s}c_{j}\delta_{t_{j}}$ from the magnitudes of linear measurements. The distribution $\mu$ is completely determined by the amplitude vector $c\in\mathbb{C}^{s}$ and the support set $\{t_{1},t_{2},\dots,t_{s}\}\subset [0,\Lambda]$, where $\Lambda>0$ is assumed to be known. We show that by using magnitudes of point evaluations of the Fourier transform $\widehat{\mu}$ of $\mu$ at $\{v_{1},v_{2},\dots,v_{n}\}\subset[-\Omega,\Omega]$, along with magnitudes of differences of modulated point evaluations of $\widehat{\mu}$, we can construct injective maps over the space of all such measures of support length at most $s$. We follow a measurement design by Alexeev et al.
\cite{alexeev_bandeira_fickus_mixon_2014}, whereby point evaluations of $|\widehat{\mu}|^{2}$ are encoded as vertices of a graph, and edges of this graph correspond to interference measurements. In particular, if the graph $\Gamma$ is $d$-regular with $n \geq \Lambda\Omega\,\frac{6\bigl(1+6/\ln(s/(\Lambda\Omega))\bigr)s}{1-2\sqrt{d-1}/d}$ vertices, then a set of $M=(d+1)n$ magnitude measurements associated with $\Gamma$ is sufficient for identifying $\mu$ up to an overall unimodular multiplicative constant. Under some additional assumptions, we provide two recovery algorithms. The first algorithm is based on phase propagation and the Prony method. We show that the reconstruction problem can be reduced to applying linear inverses and finding roots of a polynomial in the case of exact measurements, for almost every signal of the above form. In the second algorithm, at the cost of introducing a truncation error, we follow the technique presented by Cand\`es and Fern\'andez-Granda in \cite{cand'es_fernandez-granda_2013} to show that the solution of a total-variation norm minimization problem defined by the given intensity measurements yields an approximation of $\mu$. We give explicit error bounds for recovery using this method, depending on the number of given samples, and discuss the effect of noise in this approach.

Item Phase Retrieval from Random One-Bit Measurements (2020-05)
Domel-White, Dylan Samuel; Bodmann, Bernhard G.; Blecher, David P.; Foucart, Simon; Vershynina, Anna
Phase retrieval in real or complex Hilbert spaces is the task of recovering a vector, up to an overall unimodular multiplicative constant, from norms of projections onto subspaces. This dissertation deals with phase retrieval of normalized vectors after the norms of projections are quantized by pairwise comparison to retain only one bit of information.
In more specific, geometric terms, we choose a sequence of pairs of subspaces in a real or complex Hilbert space and only record which subspace from each pair is closer to the input vector. The main goal of this dissertation is to find a feasible algorithm for approximate recovery based on the qualitative information gained about the vector from these binary questions, and to establish error bounds for the approximate recovery procedure. The recovery algorithm we define uses the qualitative proximity information encoded in the binary measurements of an input vector to assemble an auxiliary matrix, and then chooses a unit vector in the principal eigenspace of this auxiliary matrix as the estimate for the input vector. For this measurement and recovery procedure, we provide a pointwise bound for fixed input vectors and a uniform bound that controls the worst-case scenario among all inputs. Both bounds hold with high probability with respect to a choice of subspaces from the uniform distribution induced by the action of the orthogonal or unitary group. For real or complex vectors of dimension $n$, the pointwise bound requires $m \geq C \delta^{-2} n \log(n)$ and the uniform bound $m \geq C \delta^{-2} n^2 \log(\delta^{-1} n)$ binary questions in order to achieve a reconstruction accuracy of $\delta$. The accuracy $\delta$ is measured by the operator norm of the difference between the rank-one orthogonal projections corresponding to the normalized input vector and its approximate recovery. After establishing the pointwise and uniform error bounds for noiseless binary measurements, we consider the case of noisy measurements. Noise for a binary-valued measurement takes the form of bit-flips that corrupt the proximity information encoded in the binary measurement. We show that our measurement and recovery scheme is robust in the presence of a fraction of adversarial bit-flips on the order of $\frac{1}{\sqrt{n}}$.
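The spirit of the recovery algorithm, binary comparisons feeding an auxiliary matrix whose principal eigenvector serves as the estimate, can be sketched in a simplified form (one-dimensional subspaces, Gaussian directions, and the particular weighting below are illustrative assumptions, not the dissertation's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4000                    # dimension, number of binary questions

x = rng.standard_normal(n)
x /= np.linalg.norm(x)            # normalized input vector

# m pairs of random lines; each bit records which line is closer to x.
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)
B /= np.linalg.norm(B, axis=1, keepdims=True)
bits = np.sign((A @ x) ** 2 - (B @ x) ** 2)

# Auxiliary matrix: each bit votes for one rank-one projection over the other.
M = (A * bits[:, None]).T @ A - (B * bits[:, None]).T @ B

# Estimate: unit principal eigenvector of M (determined up to a global sign).
x_hat = np.linalg.eigh(M)[1][:, -1]
err = min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x))
```

As the number of comparisons grows, the estimate aligns with $\pm x$; the error is measured up to the unavoidable sign (phase) ambiguity.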
We also consider random bit-flips and show in this setting that the mean squared error of reconstruction decays with respect to the number of projections $m$ on the order of $\frac{\log(m)}{m}$.

Item Pricing Multi-Asset Options with Multivariate Variance Gamma and Normal Inverse Gaussian Processes Using the Fourier Space Time-Stepping Method (2018-12)
Meerscheidt, Kyle 1982-; Kao, Edward P. C.; Auchmuty, Giles; Bodmann, Bernhard G.; Heier, Gordon; Pirrong, Craig
We use a multivariate variance gamma process developed by Jun Wang (2009), and a similarly constructed multivariate normal inverse Gaussian process, to price multi-asset options and calculate Greeks with the Fourier space time-stepping (FST) method introduced by Jackson, Jaimungal, and Surkov (2007). The prices are checked against Monte Carlo simulations to demonstrate their accuracy, and we see a marked improvement in computational efficiency. Included are options on the spark spread, the crack spread, and the crush spread, as well as other exotic options that are difficult to price with existing methods. We also adopt a parameter estimation method by Cervellera and Tucci (2016) for variance gamma processes, and adapt it for use with normal inverse Gaussian processes, to make parameter estimates for the marginal processes that are robust with respect to small perturbations of the data.

Item Region-of-Interest Reconstruction from Truncated Cone-Beam CT (2016-08)
Chowdhury, Tasadduk 1985-; Azencott, Robert; Labate, Demetrio; Bodmann, Bernhard G.; Shah, Shishir Kirit
This thesis presents a novel algorithm in 3D computed tomography (CT) dedicated to accurate region-of-interest (ROI) reconstruction from truncated cone-beam projections. Here, data acquisition involves cone-beam x-ray sources positioned on any piecewise smooth 3D curve satisfying the very generic, classical Tuy conditions, and uses only x-rays passing through the ROI.
Our ROI-reconstruction algorithm implements an iterative procedure where we systematically alternate intermediary reconstructions by an exact non-truncated cone-beam inversion operator with an effective density regularization method. We validate the accuracy of our ROI-reconstruction algorithm on a 3D Shepp-Logan phantom, a 3D image of a mouse, and a 3D image of a human jaw, for different cone-beam acquisition curves, including the twin orthogonal circles and the spherical spiral curve, by simulating ROI-censored cone-beam data and our iterative ROI reconstruction for a family of spherical ROIs of various radii. The main result is that, provided the density function is sufficiently regular and the ROI radius is larger than a critical radius, our procedure converges to an $\epsilon$-accurate reconstruction of the density function within the ROI. Our extensive numerical experiments compute the critical radius for various accuracy levels $\epsilon$. These results indicate that our ROI reconstruction is a promising step towards addressing the dose-reduction problem in CT imaging.

Item Saturating Quantum Relative Entropy Inequalities (2021-05)
Chehade, Sarah; Vershynina, Anna; Bodmann, Bernhard G.; Kalantar, Mehrdad; Brannan, Michael
This dissertation describes the progress made towards understanding several quantum entropies and their mathematical properties. Here, we focus on various quantum relative entropies, which measure distinguishability between two quantum states. In particular, we focus on understanding the widely used data processing inequality for quantum relative entropy. The data processing inequality describes how knowledge about quantum states (which describe a quantum system) cannot increase whenever the states undergo some noisy action (or a local operation).
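In standard notation (with $\rho$, $\sigma$ density operators and $\Lambda$ a quantum channel, i.e., a completely positive trace-preserving map), the data processing inequality for the Umegaki relative entropy reads:

```latex
D(\rho \,\|\, \sigma)
  = \operatorname{Tr}\bigl[\rho(\log\rho - \log\sigma)\bigr]
  \;\geq\; D\bigl(\Lambda(\rho) \,\|\, \Lambda(\sigma)\bigr).
```

Saturation means equality holds in this inequality; characterizing when that happens for the broader Rényi-type families is the subject of the chapters described below.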
In Chapter 3, we give a detailed explanation of the existing framework for the Umegaki relative entropy, the $\alpha$-Rényi relative entropy, the sandwiched $\alpha$-Rényi relative entropy, and a more general family of quantum relative entropies, namely the $\alpha$-$z$ Rényi relative entropy. Chapter 4 is dedicated to the mathematical framework and tools we use to contribute towards saturating the data processing inequality of the $\alpha$-$z$ Rényi relative entropy. In particular, we focus on jointly concave and jointly convex trace functionals. In Chapter 5, we describe and prove our main results on saturating the data processing inequality for the $\alpha$-$z$ Rényi relative entropy under certain parameters $\alpha$ and $z$. We prove necessary and algebraically sufficient conditions to saturate the data processing inequality for the $\alpha$-$z$ Rényi relative entropy whenever $1 < \alpha \leq 2$ and $\alpha/2 \leq z \leq \alpha$, provided that $z > 1$. Moreover, these conditions coincide whenever $\alpha = z$.