DRSR@DA-IICT
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
http://drsr.daiict.ac.in:80
2020-05-27T11:08:16Z

Microwave Imaging for Breast Cancer Detection using 3D Level Set based Optimization, FDTD Method and Method of Moments
Patel, Hardik Nayankumar
http://drsr.daiict.ac.in//handle/123456789/793
2019-04-11T13:10:16Z
2019-01-01T00:00:00Z
Microwave imaging is emerging as a new diagnostic option for breast cancer detection because of the non-ionizing nature of microwave radiation and the significant contrast between the dielectric properties of healthy and malignant breast tissues. Class III and IV breasts contain more than 50% fibro-glandular tissue, which makes cancer detection by X-ray mammography very difficult; microwave imaging is therefore very promising for cancer detection in dense breasts. In microwave breast imaging, the complex permittivity profile of the breast is reconstructed in three dimensions. The 3D level set based optimization proposed in this thesis reconstructs both the shape and the dielectric property values of breast tissues. A multiple-frequency inverse scattering formulation improves the computational efficiency and accuracy of the imaging system because complex-number computations are avoided. Scattered electric fields are measured at five equally spaced frequencies in the range 0.5-2.5 GHz. A class III numerical breast phantom and the Debye model are used in the multiple-frequency inverse scattering formulation. The Debye model introduces three unknowns per cell of the numerical breast phantom; linear relationships between the Debye parameters reduce these to a single unknown per cell, the static permittivity. Two level set functions are used to detect breast cancer in the 3D level set based optimization. In the modified four-stage reconstruction strategy, pixel-based reconstruction is replaced by an initial guess of the static permittivity solution. A frequency hopping method is used to avoid the local minima present at any particular frequency. During each iteration of the 3D level set method, the forward problem is solved efficiently by 3D FDTD, which leads to better reconstruction of the static permittivity profile.
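The reduction from three Debye unknowns per cell to one can be sketched as follows. This is a minimal illustration, not the thesis's parameterization: the single-pole Debye form is standard, but the linear-map coefficients `a` and `b` and the relaxation time `tau` below are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def debye_permittivity(eps_static, freq_hz, tau=15e-12,
                       a=(0.4, 1.0), b=(0.1e-3, 0.5e-3)):
    """Single-pole Debye model with the two auxiliary parameters written
    as linear functions of the static permittivity, so eps_static is the
    only unknown per cell (coefficients a, b are illustrative):
        eps_inf = a[0] * eps_static + a[1]
        sigma_s = b[0] * eps_static + b[1]
    """
    omega = 2 * np.pi * np.asarray(freq_hz)
    eps_inf = a[0] * eps_static + a[1]
    sigma_s = b[0] * eps_static + b[1]
    delta_eps = eps_static - eps_inf
    # complex relative permittivity at each frequency
    return eps_inf + delta_eps / (1 + 1j * omega * tau) \
           + sigma_s / (1j * omega * EPS0)

# permittivity of one cell at the five measurement frequencies
freqs = np.linspace(0.5e9, 2.5e9, 5)
print(debye_permittivity(40.0, freqs))
```

With this map, the inverse problem updates only the static permittivity per cell and recovers the full dispersive profile from it.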
The 3D reconstruction problem is very challenging due to the ill-posed system matrix and noisy scattered-field data. Tikhonov and total variation (TV) regularization schemes are used to overcome these challenges; in the 3D level set based optimization, TV regularization performs better than Tikhonov regularization. TV regularization reconstructs the shape and size of a very small tumour but fails to recover its exact location. Good 3D reconstruction is achieved using regularized 3D level set based optimization for an SNR of at least 20 dB in the electric field data. Electric field computation in the heterogeneous numerical breast phantom by the 3D FDTD method is very efficient because it solves Maxwell's equations on a grid through an iterative update process; its use allows the imaging problem to be solved with millions of cells. The method of moments is used to solve the electric field integral equation (EFIE), which estimates the complex permittivity of a 2048-cell human breast model. Matrix formation and inversion times are reduced to allow a large number of cells in the breast model, and the computational efficiency of the imaging system is further improved by exploiting symmetry using group theory. The matrix formed by the method of moments is ill-posed owing to the large number of buried cells in the inverse scattering formulation. The ill-posed system matrix and noise are the two major challenges in the solution of the inverse scattering problem, and the Levenberg-Marquardt method is used to address them.
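The role of regularization in stabilizing an ill-posed solve can be illustrated on a toy linear system. This is a generic Tikhonov sketch, not the thesis's TV or Levenberg-Marquardt implementation; the toy matrix and noise level are invented for illustration.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares:
    minimize ||A x - b||^2 + lam * ||x||^2,
    solved via the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy nearly singular system with noisy data (illustrative values)
rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
x_true = np.array([1.0, 2.0])
b = A @ x_true + 1e-4 * rng.standard_normal(2)
print(tikhonov_solve(A, b, lam=1e-3))
```

The penalty term keeps the solution bounded where the unregularized normal equations would amplify the noise through the near-zero singular value; TV regularization replaces the `||x||^2` penalty with a gradient-magnitude penalty that preserves sharp tissue boundaries.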

Downsampling of Signals on Graphs: An Algebraic Perspective
Vaishnav, Nileshkumar
http://drsr.daiict.ac.in//handle/123456789/791
2019-03-20T09:56:54Z
2018-01-01T00:00:00Z
Real-world data such as weather data, seismic activity data, sensor network data and social network data can be represented and processed conveniently using a mathematical structure called a graph. A graph is a collection of vertices and edges, and the relational structure between the vertices can be represented in the form of a matrix called the adjacency matrix. A graph signal is a signal supported on a given graph, and the framework for processing signals on graphs is called Graph Signal Processing (GSP). Various signal processing concepts (e.g. the Fourier transform, filtering, translation, downsampling) need to be defined in the context of graphs. A common approach is to define a Fourier transform for the graph (the Graph Fourier Transform, GFT) and use it to define the other signal processing concepts. There are two popular approaches to defining the GFT of a graph: 1) using the graph Laplacian, 2) using the adjacency matrix. In the first method, the GFT is interpreted as the expansion of a given signal in the eigenvectors of the graph Laplacian. The second method, using the adjacency matrix, results in an algebraic framework for graph signals and shift-invariant filters. In Graph Signal Processing we often encounter signals which are smooth in nature. Such signals, which have low variation, can be represented efficiently using samples on fewer vertices. The process of selecting such vertices is called graph downsampling. As graphs do not exhibit a natural ordering of data, the selection of vertices is not trivial. In this thesis, we analyze a class of graphs called bipartite graphs from a downsampling perspective and then provide a GFT-based approach to downsample signals on arbitrary graphs. For bandlimited signals on a graph, a test is provided to identify whether signal reconstruction is possible from the given downsampled signal. Moreover, if the signal is not bandlimited, we provide a quality measure for comparing different downsampling schemes.
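The adjacency-based GFT described above can be sketched in a few lines. This is a minimal illustration on a symmetric (undirected) graph, where the eigenvector basis is orthogonal; the 4-cycle example graph is an assumption made for demonstration.

```python
import numpy as np

def gft_basis(A):
    """Graph Fourier basis from the adjacency matrix: the columns of V
    in the eigendecomposition A = V diag(w) V^T (symmetric A assumed,
    so V is orthogonal)."""
    w, V = np.linalg.eigh(A)
    return w, V

def gft(V, x):
    # forward GFT: expand the signal in the eigenvector basis
    return V.T @ x

# adjacency matrix of a 4-cycle graph (undirected, unweighted)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
w, V = gft_basis(A)
x = V[:, -1]               # a bandlimited signal: one eigenvector only
x_hat = gft(V, x)
print(np.round(x_hat, 6))  # spectrum concentrated in one coefficient
```

A signal whose GFT coefficients vanish outside the first few basis vectors is bandlimited, and it is exactly such signals that can be recovered from samples on fewer vertices.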
Using this quality measure, we propose a greedy downsampling algorithm. The proposed method is applicable to directed graphs, undirected graphs, and graphs with negative edge-weights. We provide several experiments demonstrating our downsampling scheme, and compare our quality measure with other existing measures (e.g. cut-index). We also provide a method to assign an adjacency matrix to the downsampled vertices using an analogy from bipartite graphs. We further examine the concepts of homomorphism and isomorphism between two graphs from a signal processing point of view, and refer to them as GSP-isomorphism and GSP-homomorphism, respectively; collectively, we refer to these concepts as Structure Preserving Maps. The fact that linear combinations of signals and linear transforms on signals are meaningful operations has implications for GSP-isomorphism and GSP-homomorphism, which diverge from the topological interpretations of the same concepts (i.e. graph-isomorphism and graph-homomorphism). When Structure Preserving Maps exist between two graphs, signals and filters can be mapped between them while preserving spectral properties. We examine conditions on the adjacency matrices for such maps to exist. We also show that isospectral graphs form a special case of GSP-isomorphism, and that GSP-isomorphism and GSP-homomorphism are intrinsic to the resampling and downsampling processes.
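The shape of a greedy vertex-selection scheme can be sketched as follows. The quality measure used here, the smallest singular value of the sampled rows of the GFT basis, is an illustrative proxy and not necessarily the measure proposed in the thesis; the random basis is likewise an assumption for demonstration.

```python
import numpy as np

def greedy_sample(V_k, m):
    """Greedily pick m vertices for sampling a k-bandlimited signal.
    V_k is the n x k matrix of the first k GFT basis vectors. At each
    step the vertex that maximizes the smallest singular value of the
    sampled rows is added (illustrative quality measure)."""
    chosen, remaining = [], list(range(V_k.shape[0]))
    for _ in range(m):
        best, best_q = None, -1.0
        for v in remaining:
            rows = V_k[chosen + [v], :]
            q = np.linalg.svd(rows, compute_uv=False)[-1]
            if q > best_q:
                best, best_q = v, q
        chosen.append(best)
        remaining.remove(best)
    return chosen

# reconstruction is possible iff the sampled rows have full column rank
rng = np.random.default_rng(1)
V_k = np.linalg.qr(rng.standard_normal((8, 3)))[0]  # n = 8, k = 3
S = greedy_sample(V_k, 3)
print(S, np.linalg.matrix_rank(V_k[S, :]) == V_k.shape[1])
```

The full-column-rank check on the sampled rows is the standard feasibility test for recovering a bandlimited signal from its downsampled version.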

Spectrum Sensing for Cognitive Radio
Manharlal, Captain Kamal
http://drsr.daiict.ac.in//handle/123456789/792
2019-03-20T09:57:58Z
2019-01-01T00:00:00Z
Due to the rapid growth of new wireless communication services and applications, the need for radio frequency (RF) spectrum is continuously increasing. Most of the available RF spectrum has already been licensed to existing wireless systems. On the other hand, the spectrum is found to be significantly underutilized because of static frequency allocation to dedicated users, which gives rise to spectrum holes, or spectrum opportunities. Given the scarcity of RF spectrum, supporting new services and applications is a challenging task that requires innovative technologies capable of providing new ways of exploiting the available radio spectrum. Cognitive Radio (CR) has received immense research attention, both in academia and in industry, as it is considered a promising solution to the problem of spectrum scarcity through the notion of opportunistic spectrum usage. A CR is a device that senses the spectrum of licensed users (also known as primary users) for spectrum opportunities, and transmits its data only when the spectrum is sensed to be unoccupied. For efficient utilization of the spectrum while limiting interference to the licensed users, the CR should be able to sense spectrum occupancy quickly as well as accurately; this makes spectrum sensing one of the main functionalities of a cognitive radio. Spectrum sensing is a hypothesis testing problem in which the goal is to test whether the primary user is inactive (the null, or noise-only, hypothesis) or not (the alternate, or signal-present, hypothesis). Spectrum sensing can be broadly classified into two types, namely narrowband and wideband sensing. Narrowband sensing finds the occupancy status of a single licensed band, whereas wideband sensing deals with the scenario where multiple licensed bands are sensed for spectrum opportunities.
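The hypothesis test behind energy detection can be sketched in a few lines. This is a minimal illustration under real Gaussian noise and a central-limit threshold approximation; the sample count and target false-alarm rate are assumptions for the demonstration, not values from the thesis.

```python
import numpy as np
from statistics import NormalDist

def ed_threshold(noise_var, n, p_fa):
    """CED threshold from the central-limit approximation: under H0 the
    statistic (1/n) sum y^2 is approximately N(noise_var,
    2 * noise_var^2 / n) for real Gaussian noise."""
    q = NormalDist().inv_cdf(1 - p_fa)
    return noise_var * (1 + q * np.sqrt(2.0 / n))

def energy_detect(y, thr):
    # True -> decide H1 (signal present), False -> decide H0
    return np.mean(y ** 2) > thr

rng = np.random.default_rng(2)
n, sigma2 = 1000, 1.0
thr = ed_threshold(sigma2, n, p_fa=0.05)
# Monte Carlo estimate of the false-alarm probability under H0
trials = [energy_detect(rng.standard_normal(n), thr) for _ in range(2000)]
print(np.mean(trials))
```

The measured false-alarm rate should sit near the 0.05 design target, which is the quick-and-accurate trade-off the abstract refers to: a lower threshold detects the primary user sooner but raises the false-alarm probability.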
In this thesis, our focus is on the analysis of existing spectrum sensing algorithms under practical scenarios and on proposing novel techniques for spectrum sensing. Energy detection (ED), also known as conventional energy detection (CED), is a very popular spectrum sensing technique due to its simplicity and low computational complexity. In our first work, we analyze ED-based narrowband spectrum sensing over the η-λ-μ fading channel model. It is a general model that includes other fading models as special cases and can be used to study the performance of ED under practical scenarios. Performance improvement is shown using antenna diversity and cooperative sensing. The analysis is then extended to the case where shadowing exists in addition to fading. ED is generalized by replacing the squaring operation in the energy computation with an arbitrary positive exponent p; the result is known as the generalized energy detector (GED). To set the threshold for the GED, the true value of the noise variance is required, but in practice only its expected value is known. The true noise variance varies over time and location, giving rise to noise uncertainty. This leads to a phenomenon known as the signal-to-noise ratio (SNR) wall: in the presence of noise uncertainty, below a certain SNR value (the SNR wall) it is not possible to detect the presence of the signal even if a very large number of samples is taken for detection. In our second work, we study the SNR wall for the GED in the no-diversity, diversity and cooperative sensing scenarios under noise uncertainty and fading. All the derived expressions are validated using Monte Carlo simulations. In the literature, the use of antenna diversity to improve the detection performance of narrowband spectrum sensing has been extensively studied.
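The generalization from CED to GED amounts to one change in the test statistic. The sketch below is illustrative: the sample count and SNR are invented for the demonstration, and no threshold design is shown.

```python
import numpy as np

def ged_statistic(y, p):
    """Generalized energy detector statistic: the squaring of the CED is
    replaced by an arbitrary positive exponent p (p = 2 gives the CED)."""
    return np.mean(np.abs(y) ** p)

rng = np.random.default_rng(3)
n, snr = 10000, 0.5            # linear SNR (about -3 dB), illustrative
noise = rng.standard_normal(n)
signal = np.sqrt(snr) * rng.standard_normal(n)
for p in (1.0, 2.0, 3.0):
    t0 = ged_statistic(noise, p)             # H0: noise only
    t1 = ged_statistic(noise + signal, p)    # H1: signal present
    print(p, round(t0, 3), round(t1, 3))
```

For every p the H1 statistic exceeds the H0 statistic on average, and the gap relative to its variance is what determines detectability; under noise uncertainty the H0 level itself shifts, which is the mechanism behind the SNR wall.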
In our next work, we propose new detection algorithms that make use of square-law combining (SLC) and square-law selection (SLS) diversities for wideband spectrum sensing. We provide a complete theoretical analysis of the proposed algorithms and validate it using Monte Carlo simulations. Performance improvement is shown over the algorithm that does not use diversity, and we study the effects of different parameters on the performance of the proposed algorithms. An alternative to antenna diversity is cooperative spectrum sensing, where multiple secondary users, also known as cooperating secondary users, collaborate by sharing their sensing information for the detection of spectrum opportunities. Finally, in our last work, we propose a novel detection algorithm for cooperative wideband spectrum sensing. We make use of hard combining for data fusion since it minimizes the bandwidth requirement of the control channel. We show that the proposed algorithm performs better than the algorithm without cooperative sensing; it also performs better than our previously proposed antenna-diversity algorithms when an appropriate number of cooperating secondary users is chosen. We also study the effects of different parameters on the performance.
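The hard-combining fusion rule can be sketched as a k-out-of-N vote. The per-user detection probability, N, and k below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def hard_combine(decisions, k):
    """k-out-of-N hard-combining fusion: the fusion centre declares the
    band occupied when at least k of the N cooperating secondary users
    report a detection. Only one-bit decisions are shared, which keeps
    the control-channel bandwidth requirement minimal."""
    return int(sum(decisions) >= k)

# N = 5 users, each with per-user detection probability 0.6 under H1
rng = np.random.default_rng(4)
N, k, trials = 5, 2, 5000
hits = [hard_combine(rng.random(N) < 0.6, k) for _ in range(trials)]
print(np.mean(hits))  # cooperative detection probability
```

With k = 2 of N = 5 the cooperative detection probability is well above the per-user 0.6, illustrating why a suitable choice of cooperating users can outperform antenna diversity at a single secondary user.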

Variants of Orthogonal Neighborhood Preserving Projections for Image Recognition
Koringa, Purvi Amrutlal
http://drsr.daiict.ac.in//handle/123456789/789
2019-03-20T09:57:31Z
2018-01-01T00:00:00Z
With the increase in the resolution of image-capturing sensors and in data storage capacity, a huge growth in image data has been seen in the past decades. This information upsurge has created a huge challenge for machines performing tasks such as image recognition and image reconstruction. In image data, each observation or pixel can be considered a feature or a dimension, so an image can be represented as a data point in a very high-dimensional space. Most of these high-dimensional images lie on or near a low-dimensional manifold. Running machine learning algorithms on this high-dimensional data is computationally expensive and usually produces undesired results because of the redundancy present in the image data. Dimensionality Reduction (DR) methods exploit this redundancy within the high-dimensional image space and explore the underlying low-dimensional manifold structure based on some criteria or image properties such as correlation, similarity, pair-wise distances or neighborhood structure. This study focuses on variants of one such DR technique, Orthogonal Neighborhood Preserving Projections (ONPP). ONPP searches for a low-dimensional representation that preserves the local neighborhood structure of the high-dimensional space. This thesis studies and addresses some of the issues with the existing method and provides solutions for them. ONPP is a three-step procedure: the first step defines a local neighborhood, the second defines a locally linear neighborhood relationship in the high-dimensional space, and the third seeks a lower-dimensional subspace that preserves the relationship sought in the second step. The major issues with the existing ONPP technique are the local linearity assumption even with varying neighborhood size, the strict distance-based or class-membership-based neighborhood selection rule, non-normalized projections, and susceptibility to the presence of outliers in the data.
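The three ONPP steps can be sketched as follows. This is a simplified, unsupervised illustration on random data (the neighborhood size, regularization constant and data are assumptions); it omits the class-membership rules and preprocessing a practical implementation would use.

```python
import numpy as np

def onpp(X, k=5, d=2):
    """Sketch of the three ONPP steps. X is n x D (rows are points).
    1) k-nearest-neighbour selection per point,
    2) LLE-style locally linear reconstruction weights W,
    3) an orthogonal projection V from the d smallest eigenvectors of
       X^T M X, where M = (I - W)^T (I - W)."""
    n, D = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        # step 1: neighbourhood by Euclidean distance (excluding self)
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]
        # step 2: weights that reconstruct X[i] from its neighbours
        G = X[nbrs] - X[i]
        C = G @ G.T + 1e-6 * np.eye(k)   # regularized local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()         # weights sum to one
    # step 3: orthogonal basis minimizing the preserved-structure error
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    evals, evecs = np.linalg.eigh(X.T @ M @ X)
    V = evecs[:, :d]                     # D x d, columns orthonormal
    return X @ V, V

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 10))
Y, V = onpp(X, k=6, d=2)
print(Y.shape, np.allclose(V.T @ V, np.eye(2)))
```

Because V is orthonormal, out-of-sample images are embedded by a single matrix product, which is the practical advantage of ONPP over non-orthogonal neighborhood-preserving embeddings.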
This study proposes variants of ONPP, suggesting a modification in each of these steps to tackle the above-mentioned problems and better suit image recognition applications. The thesis also proposes a 2-dimensional variant that overcomes the limitations of Neighborhood Preserving Projections (NPP) and Orthogonal Neighborhood Preserving Projections (ONPP) in image reconstruction. All the new proposals are tested on benchmark data-sets for face recognition and handwritten numeral recognition. In all cases, the new proposals outperform the conventional method in terms of recognition accuracy with reduced subspace dimensions.
Keywords: Dimensionality Reduction, manifold learning, embeddings, Neighborhood Preserving Projection (NPP), Orthogonal Neighborhood Preserving Projections (ONPP), image recognition, face recognition, text recognition, image reconstruction