DRSR@DA-IICT
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
http://drsr.daiict.ac.in:80
Harvested: 2020-08-13T10:32:22Z

Microwave Imaging for Breast Cancer Detection using 3D Level Set based Optimization, FDTD Method and Method of Moments
http://drsr.daiict.ac.in//handle/123456789/793
Date: 2019-01-01
Patel, Hardik Nayankumar
Microwave imaging is emerging as a new diagnostic option for breast cancer detection because of the non-ionizing nature of microwave radiation and the significant contrast between the dielectric properties of healthy and malignant breast tissues. Class III and IV breasts contain more than 50% fibro-glandular tissue, which makes cancer detection by X-ray mammography very difficult; microwave imaging is therefore very promising for dense breasts. For microwave breast imaging, the complex permittivity profile of the breast is reconstructed in three dimensions. The 3D level set based optimization proposed in this thesis reconstructs both the shape and the dielectric property values of breast tissues. A multiple-frequency inverse scattering formulation improves the computational efficiency and accuracy of the imaging system because complex-number computations are avoided. Scattered electric fields are measured at five equally spaced frequencies in the range 0.5-2.5 GHz. A class III numerical breast phantom and the Debye model are used in the multiple-frequency formulation. The Debye model introduces three unknowns per cell of the numerical breast phantom; linear relationships between the Debye parameters reduce this to a single unknown per cell, the static permittivity. Two level set functions are used to detect breast cancer in the 3D level set based optimization. In the modified four-stage reconstruction strategy, pixel-based reconstruction is replaced by an initial guess of the static permittivity solution. A frequency hopping method avoids the local minima present at any particular frequency. During each iteration of the 3D level set method, 3D FDTD solves the forward problem efficiently, which leads to better reconstruction of the static permittivity profile.
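As a concrete illustration of the dispersion model involved, the sketch below evaluates a single-pole Debye model (with an added static-conductivity term) at the five measurement frequencies. The tissue parameter values are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity (F/m)

def debye_permittivity(freq_hz, eps_inf, eps_static, tau, sigma_s):
    """Single-pole Debye model: complex relative permittivity at freq_hz.

    eps_inf    - relative permittivity at infinite frequency
    eps_static - static relative permittivity (the single unknown per cell)
    tau        - relaxation time constant (s)
    sigma_s    - static conductivity (S/m)
    """
    omega = 2 * np.pi * np.asarray(freq_hz)
    delta_eps = eps_static - eps_inf
    return eps_inf + delta_eps / (1 + 1j * omega * tau) + sigma_s / (1j * omega * EPS0)

# Illustrative (hypothetical) fibro-glandular-like tissue parameters,
# evaluated at the five equally spaced frequencies in 0.5-2.5 GHz
freqs = np.linspace(0.5e9, 2.5e9, 5)
eps_c = debye_permittivity(freqs, eps_inf=7.0, eps_static=50.0, tau=13e-12, sigma_s=0.7)
```

The real part decreases with frequency while the imaginary part stays negative (lossy medium), which is the qualitative behaviour the multiple-frequency formulation exploits.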
The 3D reconstruction problem is very challenging due to the ill-posed system matrix and noisy scattered field data. Tikhonov and total variation (TV) regularization schemes are used to overcome these challenges; TV regularization performs better than Tikhonov regularization in the 3D level set based optimization. TV regularization reconstructs the shape and size of a very small tumour, but it fails to recover the exact location of a very small tumour. Good 3D reconstruction is achieved by regularized 3D level set based optimization for at least 20 dB SNR in the electric field data. 3D FDTD based electric field computation in the heterogeneous numerical breast phantom is very efficient because it solves Maxwell's equations on a grid by an iterative time-stepping process, so the microwave imaging problem can be solved with millions of cells. The method of moments is used to solve the electric field integral equation (EFIE), which estimates the complex permittivity of a 2048-cell human breast model. Matrix formation and inversion times are reduced to allow a large number of cells in the breast model, and the computational efficiency of the imaging system is further improved by exploiting symmetry using group theory. The matrix formed by the method of moments is ill posed due to the large number of buried cells in the inverse scattering formulation; the ill-posed system matrix and noise are the two major challenges in solving the inverse scattering problem, and the Levenberg-Marquardt method is used to address them.
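The role of regularization in taming an ill-posed system can be sketched in a few lines. The example below applies standard Tikhonov regularization to a deliberately ill-conditioned toy system; the matrix, noise level, and regularization weight are invented for illustration (the TV scheme the thesis prefers needs an iterative solver and is not shown).

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Tikhonov-regularized least squares: min ||A x - b||^2 + alpha ||x||^2.

    Solved via the normal equations (A^T A + alpha I) x = A^T b, which are
    well conditioned even when A itself is ill posed.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Toy ill-conditioned system standing in for the inverse scattering matrix:
# singular values decay over eight orders of magnitude
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30)) @ np.diag(np.logspace(0, -8, 30))
x_true = rng.standard_normal(30)
b = A @ x_true + 1e-4 * rng.standard_normal(40)   # noisy "measured fields"

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]    # unregularized: noise blows up
x_reg = tikhonov_solve(A, b, alpha=1e-6)          # regularized: noise suppressed
```

The unregularized solution amplifies the measurement noise along the small singular directions, while the regularized one stays close to the true profile, which is why regularization is essential at realistic SNR levels.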

Variants of orthogonal neighborhood preserving projections for image recognition
http://drsr.daiict.ac.in//handle/123456789/789
Date: 2018-01-01
Koringa, Purvi Amrutlal
With the increase in the resolution of image capturing sensors and data storage capacity, a huge increase in image data has been seen in past decades. This information upsurge has created a huge challenge for machines performing tasks such as image recognition and image reconstruction. In image data, each observation or pixel can be considered a feature or a dimension, so an image can be represented as a data point in a very high-dimensional space. Most of these high-dimensional images lie on or near a low-dimensional manifold. Performing machine learning algorithms on this high-dimensional data is computationally expensive and usually generates undesired results because of the redundancy present in the image data. Dimensionality Reduction (DR) methods exploit this redundancy within the high-dimensional image space and explore the underlying low-dimensional manifold structure based on some criteria or image properties such as correlation, similarity, pair-wise distances or neighborhood structure. This study focuses on variants of one such DR technique, Orthogonal Neighborhood Preserving Projections (ONPP). ONPP searches for a low-dimensional representation that preserves the local neighborhood structure of the high-dimensional space. This thesis studies and addresses some of the issues with the existing method and provides solutions for them. ONPP is a three-step procedure: the first step defines a local neighborhood; the second step defines a locally linear neighborhood relationship in the high-dimensional space; the third step seeks a lower-dimensional subspace that preserves the relationship sought in the second step. The major issues with the existing ONPP technique are the local linearity assumption even with varying neighborhood size, the strict distance-based or class-membership-based neighborhood selection rule, non-normalized projections, and susceptibility to the presence of outliers in the data.
This study proposes variants of ONPP that modify each of these steps to tackle the above-mentioned problems and better suit image recognition applications. The thesis also proposes a 2-dimensional variant that overcomes the limitations of Neighborhood Preserving Projections (NPP) and Orthogonal Neighborhood Preserving Projections (ONPP) when performing image reconstruction. All the new proposals are tested on benchmark data-sets for face recognition and handwritten numeral recognition. In all cases, the new proposals outperform the conventional method in terms of recognition accuracy with reduced subspace dimensions.
Keywords: Dimensionality Reduction, manifold learning, embeddings, Neighborhood Preserving Projection (NPP), Orthogonal Neighborhood Preserving Projections (ONPP), image recognition, face recognition, text recognition, image reconstruction
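The three-step ONPP procedure can be sketched as follows. This is a minimal illustrative implementation of the baseline method (brute-force neighbour search, a simple regularized weight solve), not any of the variants proposed in the thesis.

```python
import numpy as np

def onpp(X, k=5, d=2):
    """Minimal sketch of Orthogonal Neighborhood Preserving Projections.

    X : (n, D) data matrix, one high-dimensional sample per row.
    Step 1: find k nearest neighbours of each sample.
    Step 2: compute locally linear reconstruction weights W.
    Step 3: take the orthogonal projection V minimizing ||(I - W) X V||_F^2,
            i.e. the bottom eigenvectors of X^T (I - W)^T (I - W) X.
    """
    n, D = X.shape
    # Step 1: k nearest neighbours by Euclidean distance (brute force)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :k]

    # Step 2: reconstruction weights from a regularized local Gram system
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                       # neighbours centred on x_i
        G = Z @ Z.T + 1e-3 * np.trace(Z @ Z.T) * np.eye(k)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()                 # weights sum to one

    # Step 3: orthonormal basis from the d smallest eigenvectors
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(X.T @ M @ X)           # ascending eigenvalues
    V = vecs[:, :d]                                 # columns are orthonormal
    return X @ V, V

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))                   # toy data, 60 samples in 10-D
Y, V = onpp(X, k=6, d=2)
```

The orthonormality of V (unlike the non-orthogonal NPP projection) is what makes the embedding distance-preserving and usable for reconstruction.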

Distributed TDMA scheduling in tree based wireless sensor networks with multiple data attributes and multiple sinks
http://drsr.daiict.ac.in//handle/123456789/790
Date: 2018-01-01
Vasavada, Tejas Mukeshbhai
Data collection is an important application of wireless sensor networks. Sensors are deployed in a given region of interest. They sense physical quantities like temperature, pressure, solar radiation, speed and many others. One or more sinks are also deployed in the network along with the sensor nodes, and the sensor nodes send sensed data to the sink(s); this is known as the convergecast operation. Once the nodes are deployed, a logical tree is formed: every node identifies a parent node through which it transmits data towards the sink. As TDMA (Time Division Multiple Access) completely prevents collisions, it is preferred over CSMA (Carrier Sense Multiple Access). The next step after tree formation is to assign a time slot to every node of the tree; a node transmits only during its assigned slot. Once tree formation and scheduling are done, data transfer from sensors to sink takes place. Tree formation and scheduling algorithms may be implemented in a centralized manner, in which case the sink node executes the algorithms and informs every node of its parent and time slot. The alternative is to use distributed algorithms, in which every node decides its parent and slot on its own. Our focus is on distributed scheduling and tree formation. Most researchers consider scheduling and parent selection as two different problems. However, the tree structure constrains the efficiency of scheduling, so it is better to treat scheduling and tree formation as a single problem, addressed jointly by one algorithm. We use a single algorithm to perform both slot and parent selection. The main contributions of this thesis are explained in the subsequent paragraphs. In the first place, we have addressed scheduling and tree formation for single-sink heterogeneous sensor networks. In a homogeneous network, all nodes are of the same type; for example, only temperature sensors are deployed in a given region. Many applications require more than one type of node in the same region.
For example, sensors are deployed on a bridge to monitor several parameters like vibration, tilt, cracks, shocks and others. A network having more than one type of node is known as a heterogeneous network. If all the nodes of the network are of the same type, parent selection is trivial: a node can select the neighbor nearest to the sink as its parent. In a heterogeneous network, a node may receive different types of packets from different children. To maximize aggregation, an appropriate parent should be selected for each outgoing packet such that the packet can be aggregated at the parent node. If aggregation is maximized, nodes need to forward fewer packets, so fewer slots are required and energy consumption is reduced. We have proposed the AAJST (Attribute Aware Joint Scheduling and Tree formation) algorithm for heterogeneous networks, whose objective is to maximize aggregation. The algorithm is evaluated using simulations. Compared to the traditional approach to parent selection, the proposed algorithm results in 5% to 10% smaller schedule length and 15% to 30% less energy consumption during the data transfer phase; energy consumption during the control phase is also reduced by 5%. When a large number of nodes are deployed in the network, it is better to use more than one sink rather than a single sink, as this provides fault tolerance and load balancing. Every sink becomes the root of one tree. If finer observations are required from a region, more nodes are deployed there, i.e., the deployment is dense; deployment in other regions may not be dense because the application does not require it. When trees are formed, a tree passing through a dense region has a higher schedule length than one passing through a sparse region, so the schedule lengths are not balanced. For example, suppose the trees are T1 and T2 and their schedule lengths are SH1 and SH2 respectively.
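The idea of attribute-aware parent selection can be illustrated with a small sketch. The function below is a hypothetical simplification, not the AAJST algorithm itself: among neighbours ordered by distance to the sink, it prefers one that already forwards the same data attribute, so the packet can be aggregated there.

```python
def choose_parent(packet_attr, neighbors):
    """Attribute-aware parent choice (illustrative sketch, not AAJST).

    neighbors: list of (node_id, hops_to_sink, attrs_forwarded) tuples.
    Prefer a near neighbour already forwarding packet_attr (aggregation
    possible); otherwise fall back to the neighbour nearest the sink.
    """
    candidates = sorted(neighbors, key=lambda nb: nb[1])   # nearest-first
    for node_id, hops, attrs in candidates:
        if packet_attr in attrs:
            return node_id          # packet can be aggregated at this parent
    return candidates[0][0]         # no aggregating parent: pick the nearest

# Hypothetical neighbour table: (id, hops to sink, attributes forwarded)
neighbors = [(7, 2, {"temperature"}), (4, 2, {"vibration"}), (9, 3, {"tilt"})]
```

A vibration packet would be routed to node 4 (same attribute, aggregation possible) even though node 7 is equally close, which is exactly the behaviour that reduces the number of forwarded packets and hence the number of slots.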
Every node in tree Ti gets its turn to transmit after SHi time slots. If there is a large difference between SH1 and SH2, nodes of the tree with the larger SHi wait much longer for their turn to transmit than nodes of the other tree. But if SH1 and SH2 are balanced, the waiting time is almost the same for all nodes. Thus the schedule lengths should be balanced. The overall schedule length (SH) of the network can be defined as max(SH1, SH2); if the schedule lengths are balanced, SH is also reduced. We have proposed an algorithm known as SLBMHM (Schedule Length Balancing for Multi-sink HoMogeneous Networks). It guides every node to join a tree such that the schedule lengths of the resulting trees are balanced. Through simulations, it is found that SLBMHM results in a 13% to 74% reduction in schedule length difference. The overall schedule length is reduced by 9% to 24% compared to existing mechanisms. The algorithm results in 3% to 20% more energy consumption during the control phase, which involves the transfer of control messages for schedule length balancing and for slot and parent selection. The control phase does not take place frequently; it occurs at long intervals, so the additional energy consumption may not affect the network lifetime much. No change in energy consumption during the data transmission phase is found. The schedule lengths may also be unbalanced due to differences in the heterogeneity levels of regions. For example, in one region two different types of sensors are deployed, while in another region four different types are present. When heterogeneity is high, aggregation becomes difficult, so more packets flow through the network. Thus the tree passing through the region with two types of nodes will have a smaller schedule length than the tree passing through the region with four types of nodes.
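The balancing idea can be illustrated with a toy sketch (a hypothetical simplification, not SLBMHM itself): if each joining node is assumed to add a fixed number of slots to whichever tree it joins, greedily joining the currently shortest tree keeps the per-tree schedule lengths, and hence SH = max(SH1, SH2), small.

```python
def balanced_tree_assignment(node_costs, num_trees):
    """Greedy schedule-length balancing sketch (not the thesis's SLBMHM).

    node_costs[i] is the number of slots node i is assumed to add to the
    tree it joins - a simplification, since in a real network the cost
    depends on the node's position and on aggregation opportunities.
    Each node joins the tree whose schedule length is currently smallest.
    """
    lengths = [0] * num_trees
    assignment = []
    for cost in node_costs:
        t = lengths.index(min(lengths))   # tree with the shortest schedule
        lengths[t] += cost
        assignment.append(t)
    return assignment, lengths

# Dense region contributes cost-2 nodes, sparse region cost-1 nodes
assignment, lengths = balanced_tree_assignment([2, 2, 2, 2, 1, 1, 1, 1], 2)
SH = max(lengths)   # overall schedule length SH = max over per-tree lengths
```

Here the greedy rule ends with both trees at the same schedule length, so the schedule-length difference is zero and SH is as small as this load allows.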
We have proposed an algorithm known as SLBMHT (Schedule Length Balancing for Multi-sink HeTerogeneous Networks), an extension of SLBMHM. The proposed algorithm balances schedule lengths whether the imbalance is caused by differences in density or differences in heterogeneity. It is also evaluated through simulations. The SLBMHT algorithm results in up to 56% reduction in schedule length difference, up to 20% reduction in overall schedule length, and 2% to 17% reduction in energy consumption per TDMA frame during the data transfer phase. It results in at most 7% more energy consumption during the control phase. As the control phase does not take place very frequently, the increase in energy consumption during the control phase is offset by the reduction during the data phase; as a result, network lifetime increases.

Downsampling of signals on graphs: an algebraic perspective
http://drsr.daiict.ac.in//handle/123456789/791
Date: 2018-01-01
Vaishnav, Nileshkumar
Real-world data such as weather data, seismic activity data, sensor network data and social network data can be represented and processed conveniently using a mathematical structure called a graph. Graphs are a collection of vertices and edges; the relational structure between the vertices can be represented in the form of a matrix called the adjacency matrix. A graph signal is a signal supported on a given graph, and the framework for processing signals on graphs is called Graph Signal Processing (GSP). Various signal processing concepts (e.g. the Fourier transform, filtering, translation, downsampling) need to be defined in the context of graphs. A common approach is to define a Fourier transform for a graph (called the Graph Fourier Transform, GFT) and use it to define the other signal processing concepts. There are two popular approaches to defining the GFT of a graph: 1) using the graph Laplacian, 2) using the adjacency matrix. In the first method, the GFT is interpreted as the expansion of a given signal in the eigenvectors of the graph Laplacian. The second method, using the adjacency matrix, results in an algebraic framework for graph signals and shift-invariant filters. In the study of Graph Signal Processing, we often encounter signals which are smooth in nature. Such signals, which have low variation, can be represented efficiently using samples on fewer vertices. The process of selecting such vertices is called graph downsampling. As graphs do not exhibit a natural ordering of data, the selection of vertices is not trivial. In this thesis, we analyze a class of graphs called bipartite graphs from the downsampling perspective and then provide a GFT based approach to downsample signals on arbitrary graphs. For bandlimited signals on a graph, a test is provided to identify whether signal reconstruction is possible from the given downsampled signal. Moreover, if the signal is not bandlimited, we provide a quality measure for comparing different downsampling schemes.
Using this quality measure, we propose a greedy downsampling algorithm. The proposed method is applicable to directed graphs, undirected graphs, and graphs with negative edge weights. We provide several experiments demonstrating our downsampling scheme, and compare our quality measure with other existing measures (e.g. cut-index). We also provide a method to assign an adjacency matrix to the downsampled vertices using an analogy from bipartite graphs. We also examine the concepts of homomorphism and isomorphism between two graphs from a signal processing point of view, and refer to them as GSP-isomorphism and GSP-homomorphism, respectively; collectively, we refer to these concepts as Structure Preserving Maps. The fact that linear combinations of signals and linear transforms on signals are meaningful operations has implications for GSP-isomorphism and GSP-homomorphism, which diverge from the topological interpretations of the same concepts (i.e. graph-isomorphism and graph-homomorphism). When Structure Preserving Maps exist between two graphs, signals and filters can be mapped between them while preserving spectral properties. We examine conditions on the adjacency matrices for such maps to exist. We also show that isospectral graphs form a special case of GSP-isomorphism, and that GSP-isomorphism and GSP-homomorphism are intrinsic to the resampling and downsampling processes.
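The Laplacian-based GFT and the invertibility test behind bandlimited reconstruction can be sketched as follows. The graph and signal are toy examples chosen for illustration; the sketch mirrors only the general idea of the reconstruction test, not the thesis's specific algorithms.

```python
import numpy as np

def graph_fourier_basis(A):
    """GFT basis from the combinatorial Laplacian L = D - A (undirected graph).
    Columns of U, ordered by increasing eigenvalue, are the graph frequencies."""
    L = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(L)          # eigh returns ascending eigenvalues
    return U

def reconstruct_bandlimited(samples, keep, U, K):
    """Recover a K-bandlimited signal from its values on the vertex set `keep`.
    Exact recovery is possible precisely when the K x K submatrix
    U[keep, :K] is invertible - the kind of condition used to test whether
    a given downsampling admits perfect reconstruction."""
    coeffs = np.linalg.solve(U[keep][:, :K], samples)   # spectral coefficients
    return U[:, :K] @ coeffs                            # signal on all vertices

# Toy example: path graph on 6 vertices
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
U = graph_fourier_basis(A)

x = 1.0 * U[:, 0] + 0.5 * U[:, 1]     # smooth, 2-bandlimited signal
keep = [0, 3]                         # downsample onto two vertices
x_hat = reconstruct_bandlimited(x[keep], keep, U, K=2)
```

Because the signal occupies only the two lowest graph frequencies and the chosen 2x2 submatrix is invertible, the signal is recovered exactly from just two of its six samples.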