PhD Theses
Permanent URI for this collection: http://drsr.daiict.ac.in/handle/123456789/2
Item Open Access
Some New Techniques for Motion-free Super-resolution (Dhirubhai Ambani Institute of Information and Communication Technology, 2010)
Gajjar, Prakashchandra Purushottamdas; Joshi, Manjunath V.

Digital image processing has exhibited tremendous growth during the past decades in terms of both theoretical developments and applications. At present, image processing and computer vision are leading technologies in a number of areas, including digital communication, medical imaging, the Internet, multimedia, manufacturing, remote sensing, biometrics and robotics. The recent increase in the widespread use of cheaper digital imaging terminals such as personal digital assistants, cellular phones, digital cameras, high-definition TVs and computers in the consumer market has brought with it a simultaneous demand for higher-resolution (HR) images and video. Since high resolution images and video carry more detail and subtler gray level transitions, they offer pleasant views of pictures and videos on these devices. In commercial and industrial applications, high resolution images are desired as they lead to better analysis, interpretation and classification of the information in the images. High resolution images provide better details that are critical in many imaging applications such as medical imaging, remote sensing and surveillance.

The resolution of an image captured using a digital camera depends on the number of photo detectors in the optical sensor. Increasing the density of the photo detectors leads to high resolution images. The current hardware approach to capturing high resolution images relies on sensor manufacturing technology that attempts to increase the number of pixels per unit area by reducing the pixel size. The cost of such sensors and the related high-precision optics may be prohibitively high for consumer and commercial applications. Further, there is a limit to pixel size reduction due to shot noise encountered in the sensor itself. Since current sensor manufacturing technology has almost reached this limit, the hardware approach is no longer helpful beyond it. One promising solution is to use signal processing approaches based on computational, mathematical, and statistical techniques. Because of the recent emergence of these key relevant techniques, resolution enhancement algorithms have received a great deal of attention. Super-resolution is an algorithmic approach to reconstructing a high resolution image using one or more low resolution images. The main advantages of the approach are that it costs less, is easy to implement, and allows existing low resolution imaging systems to be used without any additional expense. The application of such algorithms will certainly grow in situations where high quality optical imaging systems are too expensive to utilize.

Motion based super-resolution approaches produce a high resolution image using non-redundant information from multiple sub-pixel shifted low resolution observations. The difficulty in these approaches is the estimation of motion between the low resolution frames at sub-pixel accuracy. Motion-free super-resolution techniques alleviate this problem by using cues other than the motion cue: the additional observations are generated without introducing relative motion among them.

In this thesis, we present learning based approaches for motion-free super-resolution. First, we solve the super-resolution problem using the zoom cue. Observations of a static scene are captured by varying the zoom setting of a camera. The least zoomed image, containing the entire scene, is super-resolved at the resolution of the most zoomed image, which covers only a small area of the scene. Generally, the decimation process is modeled as averaging: an aliased pixel in the low resolution image is obtained by averaging the corresponding pixels in the high resolution image. However, aliasing depends on several factors such as zooming and camera hardware. This motivates us to estimate the aliasing.
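To make the commonly assumed averaging decimation model concrete, here is a minimal numpy sketch; the 8x8 ramp image and decimation factor q = 2 are illustrative choices, not the thesis's actual camera model, which estimates the aliasing rather than assuming plain averaging.

```python
import numpy as np

def decimate_average(hr: np.ndarray, q: int) -> np.ndarray:
    """Toy decimation model: each LR pixel is the mean of a q x q HR block."""
    h, w = hr.shape
    assert h % q == 0 and w % q == 0, "HR size must be a multiple of q"
    # Reshape into (H/q, q, W/q, q) blocks and average within each block.
    return hr.reshape(h // q, q, w // q, q).mean(axis=(1, 3))

# Example: an 8x8 "HR" ramp image decimated by q = 2 gives a 4x4 LR image.
hr = np.arange(64, dtype=float).reshape(8, 8)
lr = decimate_average(hr, 2)
print(lr.shape)  # (4, 4)
```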
Since a part of the scene is available at high resolution in the most zoomed image, we make use of it to estimate the aliasing on the lesser zoomed observations; the aliasing is estimated using the most zoomed image together with the lesser zoomed ones. We represent the super-resolved image using Markov random fields (MRF) and obtain super-resolution using the maximum a posteriori (MAP) technique. We demonstrate the application of the proposed aliasing learning technique to the fusion of remotely sensed images. While experimenting, the MRF prior model parameters were adjusted on a trial and error basis. A better solution can be obtained using parameters estimated from the observations themselves, but that estimation requires computing the partition function, which is computationally intensive. We therefore use an autoregressive (AR) model to represent the super-resolved image, with the AR prior model parameters obtained from the most zoomed observation, and apply this technique to the fusion of remotely sensed images.

The spatial features of a low resolution image are related to its high resolution version, but an analytical representation of this relationship across scales is difficult. This motivates us to prepare a database of low resolution images and their high resolution versions, all captured using the same real camera, and to use this database to obtain the high frequency details of the super-resolved image. We propose a new wavelet based learning approach using this database and obtain a close approximation to the super-resolved image. The close approximation is used as an initial estimate while minimizing the cost function. We employ a prior model that can adapt to the local structure of the image and estimate the model parameters as well as the aliasing from the close approximation. The proposed approach is extended to super-resolve color images: we learn the details of the chrominance components using a wavelet based interpolation technique and super-resolve the luminance component using the proposed approach. We show results for gray level and color images and compare them with existing techniques.

In most current image acquisition systems in handheld devices, images are compressed prior to digital storage and transmission. Since the discrete cosine transform (DCT) is the basis of many popular codecs such as JPEG, MPEG and H.26X, we consider the DCT for learning. Using the DCT for learning alleviates the limitation of the wavelet based learning approach, namely that it cannot recover edges oriented along arbitrary directions. We propose a learning based approach in the DCT domain to learn the finer details of the super-resolved image from the database of low resolution (LR) images and their high resolution (HR) versions. Regularization using a homogeneous prior model imposes the smoothness constraint everywhere in the image and leads to an over-smooth solution. To preserve edges and finer details, we represent the super-resolved image using a nonhomogeneous AR prior model and solve the single frame super-resolution problem in a regularization framework.

Finally, we readdress the zoom based super-resolution problem using a discontinuity preserving MRF prior in order to prevent distortions across edges during optimization. We obtain the close approximation to the super-resolved image using the learning based approach and use it to estimate the model parameters and the aliasing. Since the cost function consists of a linear term and a non-linear term, it cannot be optimized using a simple gradient descent technique. A global optimization technique such as simulated annealing could be employed, but it is computationally taxing; we therefore propose the use of the particle swarm optimization technique and show the computational advantage of the proposed approach.
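To illustrate the kind of optimizer the last paragraph refers to, here is a minimal, generic particle swarm optimization sketch on a toy cost; the quadratic-plus-nonsmooth toy cost and the swarm parameters are illustrative assumptions, not the thesis's actual MAP cost function.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x: np.ndarray) -> float:
    """Toy stand-in for a SR cost: quadratic data term + nonsmooth prior term."""
    return float(np.sum((x - 3.0) ** 2) + 0.1 * np.sum(np.abs(np.diff(x))))

def pso(dim=8, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-10, 10, (particles, dim))      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Pull each particle towards its personal best and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, cost(gbest)

best, best_f = pso()
print(best_f)
```

No gradients are needed, which is why this family of methods suits cost functions with non-linear terms that defeat simple gradient descent.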
Item Open Access
Voice conversion: alignment and mapping perspective (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Shah, Nirmesh J.; Patil, Hemant A.

Understanding how a particular speaker produces speech, and mimicking that voice, is a difficult research problem due to the sophisticated mechanism involved in speech production. Voice Conversion (VC) is a technique that modifies the perceived speaker identity in a given speech utterance from a source speaker to a particular target speaker without changing the linguistic content. Building a standalone VC system consists of two stages, namely, training and testing. First, speaker-dependent features are extracted from both speakers' training data. These features are first time-aligned and corresponding pairs obtained; then a mapping function is learned on these aligned feature pairs. Once training is done, during the testing stage, features are extracted from the source speaker's held out data and converted using the mapping function. The converted features are then passed through a vocoder that produces the converted voice. Hence, there are primarily three components of standalone VC system building, namely, the alignment step, the mapping function, and the speech analysis/synthesis framework. The major contributions of this thesis are towards identifying the limitations of existing techniques, improving them, and developing new approaches for the mapping and alignment stages of VC.

In particular, a novel Amplitude Scaling (AS) method is proposed for frequency warping (FW)-based VC, which linearly transfers the amplitude of the frequency-warped spectrum using the knowledge of a Gaussian Mixture Model (GMM)-based converted spectrum without adding any spurious peaks. To overcome the issue of overfitting in Deep Neural Network (DNN)-based VC, the idea of pre-training is popular; however, pre-training is time-consuming and requires a separate network to learn the parameters. Hence, whether this additional pre-training step can be avoided by using recent advances in deep learning is investigated in this thesis. The ability of the Generative Adversarial Network (GAN) to estimate a probability density function (pdf) and generate realistic samples corresponding to a given source speaker's utterance has resulted in significant performance improvements in the area of VC. The key limitation of a vanilla GAN-based system is that it may generate samples that do not correspond to the given source speaker's utterance. To address this issue, a Minimum Mean Squared Error (MMSE) regularized GAN (i.e., MMSE-GAN) is proposed in this thesis.
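The following is a schematic PyTorch-style sketch of an MMSE-regularized generator update: the adversarial term is tied to the aligned target features by an MSE penalty. The network sizes, the weight lam, the 40-dimensional features and the random stand-in data are illustrative assumptions, not the thesis's actual MMSE-GAN architecture; the discriminator update is omitted.

```python
import torch
import torch.nn as nn

# Toy networks: map 40-dim source spectral features to target features.
G = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 40))
D = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 1))

bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
lam = 10.0  # weight of the MMSE regularizer (illustrative)

src = torch.randn(32, 40)   # aligned source features (stand-in data)
tgt = torch.randn(32, 40)   # aligned target features (stand-in data)

fake = G(src)
# The adversarial term pushes converted features towards the "real" decision,
# while the MMSE term keeps them tied to the aligned target features.
loss_g = bce(D(fake), torch.ones(32, 1)) + lam * mse(fake, tgt)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```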
Obtaining corresponding feature pairs, in the context of both parallel and non-parallel VC, is a challenging task. In this thesis, the strengths and limitations of the different existing alignment strategies are identified, and new alignment strategies are proposed for both the parallel and the non-parallel VC task. Wrongly aligned pairs affect the learning of the mapping function, which in turn deteriorates the quality of the converted voices; to remove such pairs from the training data, an outlier removal-based pre-processing technique is proposed for parallel VC. In the case of non-parallel VC, a theoretical convergence proof is developed for the popular alignment technique known as the Iterative combination of a Nearest Neighbor search step and a Conversion step Alignment (INCA). In addition, the use of dynamic features along with static features to calculate the Nearest Neighbor (NN) aligned pairs in the existing INCA and Temporal Context (TC) INCA is proposed. Furthermore, a novel distance metric is learned for the NN-based search strategies, as Euclidean distance may not correlate well with perceptual distance. Moreover, a computationally simple Spectral Transition Measure (STM)-based phone alignment technique that does not require any a priori training data is also proposed for non-parallel VC. Both the parallel and the non-parallel alignment techniques generate one-to-many and many-to-one feature pairs, which affect the learning of the mapping function and result in muffling and oversmoothing effects in VC. Hence, an unsupervised Vocal Tract Length Normalization (VTLN) posteriorgram and a novel inter mixture weighted GMM posteriorgram are proposed as speaker-independent representations in a two-stage mapping network, in order to remove the alignment step from the VC framework. In this thesis, an attempt has also been made to use the acoustic-to-articulatory inversion (AAI) technique for the quality assessment of voice converted speech. Lastly, the proposed MMSE-GAN architecture is extended in the form of a Discover GAN (i.e., MMSE DiscoGAN) for cross-domain VC applications (w.r.t. attributes of the speech production mechanism), namely, Non-Audible Murmur-to-WHiSPer (NAM2WHSP) speech conversion and WHiSPer-to-SPeeCH (WHSP2SPCH) conversion. Finally, the thesis summarizes the overall work presented and the limitations of the various approaches, along with future research directions.
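A minimal numpy sketch of the iterative nearest-neighbor alignment idea behind INCA follows: each iteration pairs converted source frames with their nearest target frames, then refits a conversion on those pairs. The linear least-squares conversion and the random stand-in features are simplifying assumptions for illustration; the actual INCA analyzed in the thesis uses a proper spectral mapping.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (200, 12))        # source speaker features
Y = rng.normal(0.5, 1, (180, 12))      # target speaker features (non-parallel)

Xc = X.copy()                           # "converted" source, refined per iteration
for _ in range(10):
    # NN search step: pair each converted source frame with nearest target frame.
    d = ((Xc[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    nn = d.argmin(axis=1)
    # Conversion step: refit a linear map X -> Y[nn] on the current pairs.
    A, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y[nn], rcond=None)
    Xc = np.c_[X, np.ones(len(X))] @ A

pairs = list(zip(range(len(X)), nn))    # final aligned (source, target) pairs
print(len(pairs))
```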
Item Open Access
Investigating into a light-weight reconfigurable VLSI architecture for biomedical signal processing applications (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Jain, Nupur; Mishra, Biswajit

Body Sensor Network systems consist of signal acquisition and processing blocks along with a power management unit and radio transmission capabilities. The high power consumption of radio transmission is often eliminated by adopting on-node processing through a signal processing platform with increased computation ability. Dedicated hardware accelerators, optimized for the operations predominantly seen in biomedical signal processing algorithms, are often used in tandem with a microprocessor for this purpose. However, they do not support further algorithm improvements and optimizations owing to their dedicated nature. Reconfigurable architectures offer the benefits of configurability, at the cost of reconfiguration overheads.

The shift-accumulate architecture developed in this thesis leverages the regularity of the dominant functions in biomedical signal processing and thereby yields gate count advantages. The configurable datapath of the architecture enables the emulation of multiple DSP operations by means of mapping methodologies developed for efficient realization in terms of hardware utilization and memory accesses. The architecture exhibits various topologies which further support efficient function realization. A configuration scheme is developed which effectively consists of a control word and a tightly coupled data memory. The architecture is realized on a Field Programmable Gate Array (FPGA) platform demonstrating target function emulation, and hardware results are compared with ideal outcomes. Video Graphics Array (VGA) and Universal Asynchronous Receiver Transmitter (UART) interface controllers are developed in this work for error quantification and analysis. The architecture contains a 6x6 array of functional units with shift-accumulate as the underlying operation, and it has a gate count of 25k and a 46.9 MHz operating frequency while emulating 36-tap FIR, CORDIC, DCT, DWT, moving average, squaring and differentiation functions.

Generally, biomedical signal processing functions include multiple stages such as noise removal and feature detection and extraction. On-the-fly reconfigurability is incorporated into the architecture, leveraging the low input data rates of biosignals: the architecture reconfigures dynamically while realizing the different functions of the signal chain. The memory adapts to the incoming target function and supports 7 functions in its present structure, while the architecture and memory remain scalable. A Pan-Tompkins algorithm based QRS detection realization is demonstrated on the architecture using this reconfigurability. This work offers a 4x reduction in area and a 2.3x increase in performance with respect to the existing contemporary literature.
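To give a feel for the shift-accumulate idea, here is a toy Python model of a multiplierless FIR filter in which each coefficient is expressed as a sum of signed powers of two, so every multiplication reduces to shifts and accumulation. The 3-tap filter and the coefficient encoding are illustrative assumptions, not the thesis's actual 6x6 datapath.

```python
# Multiply via shifts and adds: a coefficient is a list of (sign, shift) terms.
# E.g., 10 = 2**3 + 2**1 -> [(1, 3), (1, 1)]: acc += x << 3; acc += x << 1.
def shift_accumulate(x: int, terms) -> int:
    acc = 0
    for sign, shift in terms:
        acc += sign * (x << shift)
    return acc

# A 3-tap FIR with coefficients 10, 6, 1 encoded as shift terms.
COEFFS = [[(1, 3), (1, 1)],   # 10 = 2^3 + 2^1
          [(1, 2), (1, 1)],   # 6  = 2^2 + 2^1
          [(1, 0)]]           # 1  = 2^0

def fir(samples):
    out, hist = [], [0, 0, 0]
    for s in samples:
        hist = [s] + hist[:2]          # shift the delay line
        out.append(sum(shift_accumulate(h, c) for h, c in zip(hist, COEFFS)))
    return out

print(fir([1, 0, 0, 2]))  # [10, 6, 1, 20]: the impulse walks through the taps
```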
Item Open Access
On heterogeneous distributed storage systems: bounds and code constructions (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Gopal, Krishna; Gupta, Manish K.

In Distributed Storage Systems (DSSs), data is usually stored as encoded packets on different chunk servers. In this thesis, we consider heterogeneous DSSs in which each node may store a different number of packets and each node may have a different repair bandwidth. In particular, a data collector can reconstruct the file at time t using some specific nodes in the system, and for an arbitrary node failure, the system can be repaired by some set of arbitrary nodes. Using the min-cut bound, we investigate the fundamental trade-off between storage and repair cost for our model of heterogeneous DSS; the problem is formulated as a biobjective linear programming optimization problem. For an arbitrary DSS, it is shown that the calculated min-cut bound is tight.

For a DSS with symmetric parameters, a well known class of Distributed Replication-based Simple Storage (DRESS) codes is the Fractional Repetition (FR) code, in which replicas of data packets encoded by a Maximum Distance Separable (MDS) code are stored on distributed nodes. Most of the available constructions for FR codes are based on combinatorial designs and graph theory. In this thesis, FR codes with generalized parameters (where neither the replication factor of each packet nor the storage capacity of each node need be the same) are considered; such a code is called a Generalized Fractional Repetition (GFR) code. For GFR codes, we propose an elegant sequence-based construction called Flower codes, and we show that any GFR code is equivalent to a Flower code. The condition for a universally good GFR code is given in terms of such sequences, and universally good GFR codes are explored for some sequences. In general, for GFR codes with non-uniform parameters, bounds on the GFR code rate and the DSS code rate are studied. Further, we show that a GFR code corresponds to a hypergraph; using this correspondence, properties and bounds of hypergraphs map directly to the associated GFR codes, and necessary and sufficient conditions for the existence of a GFR code are obtained. It is also shown that any GFR code associated with a linear hypergraph is universally good.
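As a small illustration of the graph-based flavor of FR construction mentioned above, the sketch below builds an FR code from the complete graph K5: packets are edges, each node stores the edges incident to one vertex, so every packet is replicated exactly twice and a failed node can repair each lost packet from the unique other replica. This uniform toy example is illustrative only; the thesis's GFR/Flower constructions allow non-uniform parameters.

```python
from itertools import combinations

# FR code from K5: vertices are storage nodes, edges are packets;
# node v stores every packet (edge) incident to v.
vertices = range(5)
packets = list(combinations(vertices, 2))          # 10 packets
store = {v: [e for e in packets if v in e] for v in vertices}

# Every packet lands on exactly 2 nodes (replication factor rho = 2),
# and every node stores exactly 4 packets (storage capacity alpha = 4).
assert all(sum(p in store[v] for v in vertices) == 2 for p in packets)
assert all(len(store[v]) == 4 for v in vertices)

# Exact uncoded repair: a failed node recovers each lost packet by
# downloading one packet from the unique other node holding its replica.
failed = 0
helpers = {p: next(v for v in vertices if v != failed and p in store[v])
           for p in store[failed]}
print(helpers)
```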
Item Open Access
Investigation into radiation hardening techniques on differential receiver and power management unit in 0.8 µm CMOS for space applications (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Kasodniya, Sanjay Kumar; Mishra, Biswajit

This thesis details radiation hardening techniques and their implementation on a Differential Receiver and a Power Management Unit. The Differential Receiver ASIC is designed with addressable, synchronous and asynchronous features, and is called the Addressable Synchronous/asynchronous Differential Receiver (ASDR) ASIC. Onboard payload data handling subsystems use standard bus interfaces (RS422) and protocols (e.g. UART) that are usually available as separate devices. The proposed ASDR implements the RS422 electrical interface for differential serial data reception together with a multi-mode synchronous/asynchronous serial data handling protocol, as a single chip solution. The design has a 5-bit self-address feature, which is useful when these devices are used in a multi-drop configuration. Such a single chip solution is of importance for ground based, space and related applications. The design is fabricated in a 0.18 µm CMOS process. The radiation hardening techniques of guard-ring, node-splitting and differential-charge-cancellation have been implemented in the ASIC. After fabrication, the differential receiver ASIC has been tested in radiation environments specific to Single Event Effects and Total Ionizing Dose, with the aim of making it suitable for space applications. To the best of our knowledge, this is the first integrated chip that provides the interface (both Tx and Rx configurations) together with the protocol (UART and synchronous serial-to-parallel) for low speed differential data communication.

A capacitive power management unit (PMU) for a DC energy harvester, such as a photovoltaic (PV) cell, is also proposed. It is assumed that the input has a minimum voltage of approximately 460 mV and can go up to 800 mV, the typical output of a PV cell. This is our initial attempt to design a novel PMU based on standard 0.18 µm CMOS models, to be used for applications (WSN, payload sensors) that require energy autonomy. The PMU is designed to interface logic circuits with energy harvesters, and radiation hardening techniques are used to make the design suitable for space applications. The temperature sensing system on a satellite can use the ASDR and the PMU for wired and wireless approaches respectively.

Item Open Access
Microblog processing: summarization and impoliteness detection (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Modha, Sandip Jayantilal; Majumder, Prasenjit

Social media is an excellent source for studying human interaction and behavior. Sensing social media such as Facebook and Twitter through smart autonomous applications empowers the user community with real-time information as it unfolds across different parts of the world. In this thesis, we study social media text from the summarization and impoliteness perspectives.

In the first part of the thesis, microblog summarization is explored in three scenarios. In the first scenario, we present a summarization system, built over the Twitter stream, to summarize a topic for a given duration. A daily summary or digest from a microblog is a way to update social media users on what happened that day on the subject of their interest. In designing a microblog based summarization system, tweet ranking is the primary task. After ranking, relevant tweet selection is the crucial task for any summarization system due to the massive volume of tweets in the Twitter stream; in addition, the system should include novel tweets in the summary or digest. The measure of relevance is typically the similarity score, between the user information need and the tweets, obtained from different text similarity algorithms: the more similar, the higher the score. We therefore need to choose a threshold that minimizes false-positive judgments. We have developed various methods that exploit statistical features of the rank list to estimate these thresholds, and evaluated them against thresholds determined via grid search. We have used language models to rank the tweets and select relevant ones, where the choice of smoothing technique and its parameters is critical; results are also compared with the standard probabilistic ranking algorithm BM25. Learning-to-Rank strategies are also implemented and show substantial improvement on some of the result metrics. In the second scenario, we develop a real-time version of the summarization system that continually monitors the Twitter stream and generates relevant and novel real-time push notifications delivered to users' cellphones. In the third scenario, the summarization system is evaluated on a disaster-related incident such as an earthquake. We have also performed comprehensive failure analysis on our experiments and identified key issues that can be addressed in the future.

In the second part of the thesis, the social media stream is studied from the impoliteness perspective. Due to an exponential rise in the social media user base, incidents of hate speech, trolling and cyberbullying are also increasing, and hate speech detection has been reshaped into different research problems such as aggression detection, offensive language detection and factual-post detection. We refer to all such anti-social typology under the ambit of impoliteness. This thesis studies the effectiveness of different text representation schemes on an NLP downstream task such as classification. A set of text representation schemes, based on bag-of-words techniques, distributed word representations (word embeddings) and sentence embeddings, is empirically evaluated on traditional classifiers and deep neural models.
Experimental results show that on the English dataset, overall, text representation using Google's Universal Sentence Encoder (USE) performs better than word embeddings and BoW techniques on traditional classifiers such as SVM, while pre-trained word embedding models perform better on classifiers based on deep neural models. Recent pre-trained transfer learning models such as ELMo, ULMFiT and BERT are fine-tuned for the aggression classification task; however, their results are not at par with the pre-trained word embedding models. Overall, word embeddings using fastText produce a better weighted F1-score than Word2Vec and GloVe. On the Hindi dataset, BoW techniques perform better than word embeddings on traditional classifiers such as SVM, while pre-trained word embedding models perform better on classifiers based on deep neural nets. Statistical significance tests are employed to verify the significance of the results: deep neural models are more robust and perform substantially better than traditional classifiers such as SVM, logistic regression and naive Bayes.

During a disaster-related incident, Twitter is flooded with millions of posts, and in such emergencies the identification of factual posts is vital for organizations involved in the relief operation. We approach this problem as a combination of classification and ranking. Following from this work, the aggression visualization problem is addressed as the last component. We have designed a user interface, based on web browser plugins over Facebook and Twitter, to visualize the aggressive comments posted by any user. This plugin interface might help security agencies keep a tab on the social media stream. The proposed plugin helps celebrities customize their timeline by raising the appropriate flag, enabling them to delete or hide abusive comments. In addition, the interface might help the research community prepare weakly labeled training data in a few minutes from comments posted by users on a celebrity's social media timeline.
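For reference, here is a compact sketch of the BM25 scoring function used as the baseline ranker above; k1 and b are set to common textbook defaults, and the toy tokenized tweets and query are invented for illustration.

```python
import math
from collections import Counter

def bm25_score(query, doc, docs, k1=1.2, b=0.75):
    """Score one tokenized document for a query with the textbook BM25 form."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(term in d for d in docs)      # document frequency of the term
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

tweets = [["earthquake", "relief", "camp"], ["concert", "tonight"],
          ["earthquake", "magnitude", "reported"]]
query = ["earthquake", "relief"]
ranked = sorted(tweets, key=lambda d: bm25_score(query, d, tweets), reverse=True)
print(ranked[0])
```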
Item Open Access
Hybrid partitioning and distribution of RDF data (Dhirubhai Ambani Institute of Information and Communication Technology, 2018)
Padiya, Trupti; Bhise, Minal

RDF is a standard model by the W3C specifically designed for data interchange on the web. RDF was established for the development of the semantic web; however, nowadays RDF data is used in diverse domains and is no longer limited to the semantic web. A tremendous increase in RDF data has been witnessed due to these applications, and with growing RDF data it is vital to manage it efficiently. This thesis aims at efficient storage and faster querying of RDF data using various data partitioning techniques. It studies basic data partitioning techniques for RDF data storage and proposes the use of hybrid data partitioning, in centralized and distributed environments, as part of the solution to store and query RDF data.

The dissertation emphasizes efficient data storage and faster query execution for stationary RDF data. It demonstrates basic data partitioning techniques such as PT (Property Table), BT (Binary Table), HP (Horizontally Partitioned Table), and the use of MV (Materialized Views) over BT. Even though the basic data partitioning techniques outperform TT (Triple Table), they suffer from various performance issues, and the thesis gives a detailed insight into the advantages and disadvantages of each. Consequently, it proposes hybrid data partitioning solutions that exploit the best of the available techniques: DAHP (Data-Aware Hybrid Partitioning), DASIVP (Data-Aware Structure Indexed Vertical Partitioning) and WAHP (Workload-Aware Hybrid Partitioning). DAHP and WAHP are combinations of PT and BT, whereas DASIVP combines structure index partitioning with BT. DAHP and DASIVP take a data-aware approach and WAHP a workload-aware approach: the data-aware approach stores RDF data based on how the data is related within the dataset, while the workload-aware approach stores RDF data based on what is queried together. The thesis presents a detailed evaluation of query performance, measured as QET (Query Execution Time), and data storage for all the partitioning techniques, and calculates the break-even point for each. The hybrid data partitioning techniques show significant improvement over the basic ones. A set of metrics is devised to help judge the suitability of a given data partitioning technique for an RDF dataset.

RDF data has grown to a point where it is difficult to manage on a single machine. It becomes necessary to distribute the data over different nodes and process it in parallel so that efficient query performance can be achieved. Data distribution and parallel processing of queries may generate many intermediate results that require communication among nodes, so inter-node communication must be minimized to achieve faster query execution. This work presents a solution to manage RDF data in a distributed environment using a proposed hybrid technique, aiming at efficient storage and faster query execution with minimal inter-node communication. Finally, the dissertation proposes DWAHP (Workload-Aware Hybrid Partitioning and Distribution), which exploits the query workload and distributes data among nodes. DWAHP has two phases: Phase 1 applies the workload-aware hybrid partitioning technique, generating workload-aware clusters consisting of PT and BT; Phase 2 applies a distribution scheme that distributes data among nodes using an n-hop Property Reachability Matrix. Phase 1 helps reduce the number of joins, as it keeps data that is queried together in a separate partition; Phase 2 helps diminish inter-node communication through the n-hop Property Reachability Matrix. The thesis demonstrates DWAHP and analyzes its query performance in terms of query execution time, query cost, storage space and inter-node communication. Queries on RDF data mostly involve star and linear query patterns, and DWAHP manages joins such that it can answer all linear and star queries without inter-node communication. Compared with a state-of-the-art solution, DWAHP achieves 72% faster query execution time and 61% lower query cost while occupying less than one-third of the storage space. As RDF data spreads into ever more domains, the discussed partitioning techniques can be utilized for various RDF stores: data-aware RDF stores suit applications where the data characteristics are known, and workload-aware RDF stores suit those where the queries are known in advance.
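As a toy sketch of the two base layouts the hybrid techniques combine, the code below builds a binary (two-column) table per predicate and a property table keyed by subject. The tiny triple set and the use of plain Python dicts instead of an RDBMS are illustrative assumptions.

```python
from collections import defaultdict

triples = [("s1", "name", "Alice"), ("s1", "age", "30"),
           ("s2", "name", "Bob"),   ("s2", "age", "25"),
           ("s3", "title", "RDF Hybrid Partitioning")]

# BT (Binary Table): one two-column (subject, object) table per predicate.
binary_tables = defaultdict(list)
for s, p, o in triples:
    binary_tables[p].append((s, o))

# PT (Property Table): one row per subject, one column per predicate,
# suited to subjects that share the same set of predicates.
property_table = defaultdict(dict)
for s, p, o in triples:
    property_table[s][p] = o

# A star query (all properties of s1) is a single-row lookup in the PT,
# whereas on BT it would need one join per requested predicate.
print(property_table["s1"])
print(binary_tables["name"])
```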
Item Open Access
MMIC based high power transmit/receive and receive protection switches with integrated LNAs (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Rao, Ch. V. Narasimha; Ghodgaonkar, Deepak K.

The conventional high power microwave signal switching component is the silicon/GaAs PIN diode, while more recently GaN pHEMT devices have been used for this application. A GaAs FET used as the switching element, as a PIN diode replacement, has the advantages of fast switching speed, simplified bias networks, monolithic compatibility, and lower power consumption in the driver circuitry. The major advantage of a GaAs FET based high power T/R switch or protection switch is that other receiver functionalities can be integrated onto the MMIC (Monolithic Microwave Integrated Circuit), making a multi-functional core chip. However, the power performance of a FET is limited by its current-handling capability in the low-impedance state and by its breakdown voltage in the high-impedance state.

In this thesis, various new and novel circuit architectures are presented for increasing the power handling capability of GaAs FET based switches, using low noise and low power processes, to enable the realization of a high power T/R switch with integrated LNA and an absorptive high power receive protection switch with integrated LNA. The developed technique is then employed in designing high power GaN FET based switches, further increasing the power handling capability beyond that of an individual GaN FET switch while integrating the LNA in the same GaN process.

An on-chip current distributed architecture for increasing the power handling capability is proposed, analyzed and employed to realize a GaAs MMIC 10 W T/R switch with integrated LNA in a 0.25 µm GaAs pHEMT process (PH25 of M/s UMS, France). The measured transmit loss, Noise Figure (NF) and receive path gain are 1.0 dB, 2.5 dB and 5.6 dB respectively over 9.3-9.9 GHz. A novel impedance transformation combined with the on-chip current distribution technique, for increasing the power handling capability and improving the receive path loss, is proposed, analyzed and employed in designing a GaN MMIC 200 W T/R switch with integrated LNA in a 0.25 µm GaN pHEMT process (GH25 of M/s UMS, France). The layout level electromagnetic and co-simulation results of this 200 W pulsed power handling T/R switch with integrated LNA are 0.8 dB transmit path loss with 45 dB receive isolation, and 2.6 dB NF with 20 dB gain for the receive path over 3.1-3.3 GHz. Also in this thesis, novel FET stacking combined with the on-chip current distributed architecture, again for increasing the power handling capability and improving the receive path loss, is proposed, analyzed and employed to realize a GaAs MMIC 20 W absorptive receive protection switch with integrated LNA in a 0.13 µm GaAs pHEMT process (D01PHS of M/s OMMIC, France). The measured results are protection up to 20 W with 28 dB receive isolation, 2.9 dB NF and 20 dB gain over 9.3-9.9 GHz.

The constituent components required for designing a T/R switch with LNA, viz., high power quadrature hybrids, high power switches and LNAs, are studied and their design details presented.
Various high power MMIC quadrature hybrid configurations have been studied, and the design, analysis and simulation results of a compact distributed high power MMIC spiral hybrid and a high power MMIC quasi-lumped impedance transforming hybrid are presented. MMIC GaAs and GaN HEMT based switch configurations have been studied with respect to power handling capability; novel techniques such as the on-chip current distributed architecture for increasing the power handling capability, and the impedance transformation and FET stacking techniques for improving the receive path insertion loss, are proposed and analyzed, with simulation results presented. MMIC GaAs HEMT based low noise amplifier configurations have been studied, and X-band single stage and two-stage LNA designs, simulations and measurement results are presented, along with the design and simulation results of an MMIC GaN HEMT based S-band low noise amplifier.

Item Open Access
Investigation into a low cost low energy IoT enabled wireless sensor node for particulate matter prediction for environmental applications (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Shah, Jalpa Bharatkumar; Mishra, Biswajit

In recent years, increased transportation, the removal of trees to make way for buildings, and the establishment of new industries have been the main sources of increased pollution. Increased pollution is one of the major challenges faced by all countries, as it affects the environment and human health. One way to deal with this challenge is to monitor environmental quality and take corrective steps. The conventional instruments used for environment monitoring are accurate but costly, time consuming, dependent on human intervention and lacking in portability. An Internet of Things (IoT) enabled wireless sensor node is an ideal solution for real-time monitoring of the environment in today's urban ecosystems.

We have developed a low power IoT enabled wireless sensing and monitoring platform for simultaneous, real-time monitoring of ten different environmental parameters: temperature, relative humidity, light, barometric pressure, altitude, carbon dioxide (CO2), volatile organic compounds (VOCs), carbon monoxide (CO), nitrogen dioxide (NO2) and ammonia (NH3). Low power is achieved through modifications in the sensor node hardware architecture and by developing a prediction model that eliminates the need for a power hungry sensor. The proposed hardware architecture reduces the power and the number of interfacing pins required from the microcontroller, and it is adaptable to other applications after replacement or removal of sensors and/or modification of the supply. The developed system consists of a transmitter node and a receiver node. The data received at the receiver node is monitored and recorded in a spreadsheet on a personal computer (PC) through a Graphical User Interface (GUI) made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, enabling IoT. The system is validated through experiments and deployment for real-time monitoring. For the proposed system, a transmission reliability of 97.4% is achieved. The power consumption of the sensor node is quantified as 25.67 mW and can be varied by changing the sleep time or sampling time of the node. A battery life of approximately 31 months can be achieved for a measurement cycle of 60 seconds.
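A back-of-the-envelope sketch of this kind of duty-cycled battery life estimate follows. Only the 25.67 mW figure comes from the abstract; the sleep power, active window, cycle period and battery energy are assumptions invented for illustration, so the printed lifetime is indicative, not the thesis's measured result.

```python
# Rough duty-cycle energy budget for a sensor node (illustrative constants;
# only the 25.67 mW active-power figure appears in the abstract).
active_mw, sleep_mw = 25.67, 0.05   # active vs. sleep power (sleep assumed)
t_active_s, period_s = 2.0, 60.0    # assume 2 s of work every 60 s cycle
battery_wh = 20.0                   # assumed battery energy

avg_mw = (active_mw * t_active_s + sleep_mw * (period_s - t_active_s)) / period_s
months = (battery_wh * 1000.0 / avg_mw) / (24 * 30)
print(f"average power: {avg_mw:.3f} mW -> about {months:.0f} months")
```

Lengthening the cycle period lowers the average power, which is why battery life can be traded against sampling rate as the abstract notes.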
PM2.5 is one of the important pollutants for measuring air quality. Existing methods and instruments for the measurement of PM2.5 are laborious, not applicable to both online and offline use, have response times from a few minutes to hours, and lack portability. In this work we present a correlation study of PM2.5 with other pollutants, based on the data received by the Central Pollution Control Board (CPCB) online station at N 23°0'6.6287", E 72°35'48.7816". Based on the correlation results, CO, NO2, SO2 and VOC parameters (benzene, toluene, ethyl benzene, m+p-xylene, o-xylene) are selected as predictors for developing the PM2.5 prediction model. The prediction model is developed using an Artificial Neural Network (ANN), resulting in a simple analytical equation. Since the proposed model is expressed as a simple mathematical equation, it can be deployed on a wireless sensor node, enabling online monitoring of PM2.5. The closeness of predicted and actual values of PM2.5 is verified by evaluating the derived model equations in a low cost processing tool (e.g., a spreadsheet), thereby eliminating the need for proprietary tools. The RMSE and regression coefficient of the derived model are 1.7973 µg/m³ and 0.9986 respectively; predicted and actual values of PM2.5 are found to be very close to each other, with variation in the acceptable range. The derived model is recalibrated, in terms of predictors and coefficients, to test it in a different city using data from the developed low power wireless sensor node. Based on the sensors available on the node, recalibration reduces the predictors to three: CO, NO2 and VOC. For the recalibrated model, results show an RMSE of 7.5372 µg/m³ and an R² of 0.9708. The obtained results show the feasibility and effectiveness of the proposed approach, and further improvement is possible by recalibrating the prediction model with data from multiple stations at the place of deployment. The model can be used for online or offline measurement, and the time involved is small compared to conventional methods, being equal to the processing time of the equations. To provide accurate results, the proposed wireless sensor node is calibrated against standard calibrated instruments. The proposed system thus has advantages over conventional methods: it is less costly, automated, portable, less time consuming, and offers higher temporal and spatial resolution.
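A minimal sketch of this modeling step follows: a small ANN regressor is fitted to gas-sensor predictors and its weights read out, which is what makes the model expressible as a short closed-form equation deployable on a node or in a spreadsheet. The synthetic data, the three predictor names and the network size are illustrative assumptions, not the CPCB data or the thesis's fitted coefficients.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for the three recalibrated predictors: CO, NO2, VOC.
X = rng.uniform(0, 1, (500, 3))
pm25 = 40 * X[:, 0] + 25 * X[:, 1] + 15 * X[:, 2] + rng.normal(0, 1, 500)

# One small hidden layer keeps the fitted model expressible as a short
# analytical equation: y = W2 . tanh(W1 x + b1) + b2.
net = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X, pm25)

pred = net.predict(X)
rmse = float(np.sqrt(np.mean((pred - pm25) ** 2)))
print(f"training RMSE: {rmse:.3f}")

W1, W2 = net.coefs_        # weights of the analytical equation
b1, b2 = net.intercepts_
print(W1.shape, W2.shape)  # (3, 4) and (4, 1): small enough for a spreadsheet
```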
Item Open Access
On designing DNA codes and their applications (Dhirubhai Ambani Institute of Information and Communication Technology, 2019)
Limbachiya, Dixita; Gupta, Manish K.

Bio-computing uses complexes of biomolecules such as DNA (deoxyribonucleic acid), RNA (ribonucleic acid) and proteins to perform computational processes for encoding and processing data. In 1994, L. Adleman introduced the field of DNA computing by solving an instance of the Hamiltonian path problem using a collection of DNA sequences and biotechnology lab methods. The idea of DNA hybridization was used to perform this experiment. DNA hybridization is the backbone of any computation using DNA sequences; however, it is also a source of errors. To use DNA for computing, a specific set of DNA sequences (DNA codes) satisfying particular properties (DNA code constraints) that avoid cross-hybridization is designed for the task at hand. The contributions of this dissertation can be broadly divided into two parts: (1) designing DNA codes using algebraic coding theory, and (2) designing codes for DNA data storage systems to encode data in DNA.

The main research objective in designing DNA codes over the quaternary alphabet {A, C, G, T} is to find the largest possible set of M codewords, each of length n, such that the codewords are at least at distance d from one another and satisfy the desired constraints that are feasible with respect to practical implementation. In the literature, various computational and theoretical approaches have been used to design sets of sufficiently dissimilar DNA codewords; in particular, DNA codes have been constructed using coding theoretic approaches over fields and rings. In this dissertation, one such approach is used to generate DNA codes from the ring R = Z4 + wZ4, where w² = 2 + 2w. Some algebraic properties of the ring R are explored. In order to define an isometry from the elements of the ring R to DNA, a new distance, called the Gau distance, is defined. The Gau distance motivates a distance preserving map called the Gau map f, whose linearity and closure properties are obtained. General conditions on the generator matrix over the ring R to satisfy the reverse and reverse complement constraints on the DNA code are derived. Using this map, several new classes of DNA codes satisfying the Hamming distance, reverse and reverse complement constraints are given, and families of DNA codes via Simplex type codes, first order and rth order Reed-Muller type codes, and Octa type codes are developed. Some of the constructed DNA codes are optimal with respect to the bounds on M, the size of the code.

These DNA codes can be used for a myriad of applications, one of which is data storage. DNA is stable, robust and reliable; theoretically, it is estimated that one gram of DNA can store 455 EB (1 exabyte = 10^18 bytes). These properties make DNA a potential candidate for data storage. However, a DNA data storage system faces various practical constraints, and in this work we construct DNA codes under some of these constraints to store data efficiently. One practical constraint concerns repeated bases (runlengths) of the same DNA nucleotide: each DNA codeword should avoid long runlengths, so we propose codes for data storage that disallow runs of any base. A fixed GC-weight u (the number of G and C nucleotides in a DNA codeword) is another requirement for DNA codewords used in DNA storage: codewords with large GC-weight lead to insertion and deletion (indel) errors in the DNA reading and amplification processes, so it is crucial to fix the GC-weight of the code. We propose methods that generate families of codes for DNA data storage systems satisfying the no-runlength and fixed GC-weight constraints. The first method uses constrained quaternary coding, and the second uses DNA Golay subcodes over a ternary encoding. For the constrained quaternary coding, we give a construction algorithm for finding families of DNA codes with the no-runlength and fixed GC-weight constraints, and the number of DNA codewords of fixed GC-weight satisfying the no-runlength constraint is enumerated.
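The brute-force sketch below enumerates quaternary words of length n with no two equal adjacent bases (no-runlength) and a fixed GC-weight u. Exhaustive search over 4^n is only feasible for toy lengths and is meant to illustrate the two constraints, not the thesis's construction or its exact counting results.

```python
from itertools import product

def constrained_words(n: int, u: int):
    """Length-n DNA words with no adjacent repeated base and GC-weight u."""
    for word in product("ACGT", repeat=n):
        if any(a == b for a, b in zip(word, word[1:])):
            continue  # violates the no-runlength constraint
        if sum(c in "GC" for c in word) == u:
            yield "".join(word)

words = list(constrained_words(4, 2))
print(len(words), words[:5])
```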
We note that prior work only gave bounds on the number of such codewords, whereas in this work we count these DNA codewords exactly. We also observe that the bound in the previous work does not take into account the distance of the code, which is essential for data reliability; we therefore incorporate the distance to obtain a lower bound on the number of codewords satisfying the fixed GC-weight and no-runlength constraints.

In the second method, we demonstrate a Golay subcode method to encode data in a variable chunk architecture of DNA using ternary encoding. N. Goldman et al. introduced the first proof of concept of DNA data storage in 2013 by encoding data in DNA without using error correction, which motivated us to implement this method. While implementing it, a bottleneck was identified that limits the amount of data that can be encoded, owing to the fixed length chunk architecture used for data encoding. In this work, we propose a modified scheme, using a non-linear family of ternary codes based on the Golay subcode, that supports a flexible length chunk architecture for data encoding in DNA. By using the ternary Golay subcode, two substitution errors can be corrected.

In a nutshell, the significant contributions of this thesis are DNA codes designed with specific constraints. First, DNA codes from a ring are proposed via algebraic coding, by defining a new type of distance (the Gau distance) and map (the Gau map); these DNA codes satisfy the reverse, reverse complement and complement constraints with a minimum Hamming distance, and several families of such codes and their properties are studied. Second, DNA codes satisfying the no-runlength and GC-weight constraints for a DNA data storage system are developed using constrained coding and the Golay subcode method.
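To make the ternary, runlength-free encoding concrete, here is a sketch of the rotation trick popularized by Goldman et al.: each trit selects one of the three nucleotides that differ from the previously written base, so no base ever repeats. The trit stream is invented, and the sketch omits the chunking and the Golay error correction discussed above.

```python
BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    """Map each trit 0-2 to one of the 3 bases differing from the last base."""
    out = []
    for t in trits:
        choices = [b for b in BASES if b != prev]  # 3 candidates, never a repeat
        prev = choices[t]
        out.append(prev)
    return "".join(out)

def dna_to_trits(dna, prev="A"):
    trits = []
    for b in dna:
        choices = [c for c in BASES if c != prev]
        trits.append(choices.index(b))
        prev = b
    return trits

msg = [0, 2, 1, 1, 0, 2]                  # an arbitrary trit stream
strand = trits_to_dna(msg)
assert dna_to_trits(strand) == msg        # round trip recovers the trits
print(strand)                             # no two adjacent bases are equal
```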