M Tech Dissertations

Permanent URI for this collection: http://drsr.daiict.ac.in/handle/123456789/3

Search Results

Now showing 1 - 10 of 24
  • Item (Open Access)
    Automated Analysis of Natural Language Textual Specifications: Conformance and Non-Conformance with Requirement Templates (RTs)
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2023) Balwani, Shivani; Tiwari, Saurabh
    Natural Language (NL) is widely adopted as the primary method of expressing software requirements, although determining its superiority is challenging. Empirical evidence suggests that NL is the most commonly used notation in industry for specifying requirements. One of the main advantages of NL is its accessibility to various stakeholders, requiring minimal training to understand. Additionally, NL possesses universality, allowing its application across diverse problem domains. However, the unrestricted use of NL requirements can result in ambiguities. To address this issue and restrict the usage of NL requirements, Requirement Templates (RTs) are employed. RTs have a fixed syntactic structure and consist of predefined slots. When requirements are structured using RTs, ensuring they conform to the specified template is crucial. Manually verifying the conformity of requirements to RTs becomes a tedious task due to the large size of industry requirement documents, and it also introduces the possibility of errors. Furthermore, rewriting requirements to conform to the template structure when they initially do not conform presents a significant challenge. To overcome these issues, we propose a tool-assisted approach that automatically verifies whether Functional Requirements (FRs) conform to RTs. It provides a recommendation for a Template Non-Conformance (TNC) requirement by generating a semantically identical requirement that conforms to the template structure. Our study focused on two well-known RTs, namely the Easy Approach to Requirements Syntax (EARS) and Rupp's template, for checking conformance and making recommendations. We utilized Natural Language Processing (NLP) techniques and applied our approach to industrial and publicly available case studies. Our results demonstrate that the tool-based approach facilitates requirement analysis and aids in recommending requirements based on their conformity with RTs. Furthermore, we have developed an approach to assess the testability of Non-Functional Requirements (NFRs) by analyzing the associated acceptance criteria. We evaluated the applicability of this approach by applying it to various case studies and determining the testability of the NFRs.
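    As a rough illustration of template-conformance checking, the sketch below matches requirements against simplified surface patterns for two EARS requirement types. The patterns, the function name check_conformance, and the example sentences are illustrative assumptions; the thesis uses richer NLP analysis and also generates conformant rewrites, which this sketch does not attempt.

    ```python
    # Minimal sketch of EARS conformance checking; the patterns below are
    # simplified assumptions, not the thesis' actual rules.
    import re

    EARS_PATTERNS = {
        "ubiquitous":   re.compile(r"^The .+ shall .+\.$", re.IGNORECASE),
        "event-driven": re.compile(r"^When .+, the .+ shall .+\.$", re.IGNORECASE),
    }

    def check_conformance(requirement: str) -> str:
        """Return the matching EARS pattern name, or 'TNC' for non-conformance."""
        for name, pattern in EARS_PATTERNS.items():
            if pattern.match(requirement.strip()):
                return name
        return "TNC"

    print(check_conformance("When the door opens, the system shall sound an alarm."))
    print(check_conformance("Alarm must maybe sound sometimes"))  # -> TNC
    ```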
  • Item (Open Access)
    Data blocking for partitioned data
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2018) Deore, Prajakta Balwant; Bhise, Minal
    Over the last few years, the data consumed and produced by various applications has been increasing tremendously. This thesis aims to achieve faster query processing for this data. The overall work of the thesis is divided into three phases: data partitioning, data blocking, and data skipping. Data partitioning includes identifying hot and cold partitions of the data and storing them as separate data blocks. Partitioned data is stored contiguously on the disk and verified. Data blocking stores the data blocks on disk such that all hot data blocks are stored together and all cold data blocks are stored together. Data skipping is performed in order to reduce the disk seek time while accessing the data from disk. Data partitioning and blocking are implemented on a column-oriented database system. Data blocking resulted in a significant reduction in the amount of data scanned and in query response time. The results are obtained for query execution time on three different query categories: range queries, nested queries, and aggregate queries. On average, for these three types of queries, query execution time (QET) became 55 times faster for partitioned data. For the above query categories, data blocking and skipping on average reduce the data scanned by 97% and hence accelerate queries.
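    To make the skipping step concrete, here is a minimal sketch, under an assumed block size and naming, of a range scan over blocks that carry min/max metadata: blocks whose range cannot contain a qualifying value are skipped without being read, which is what cuts the amount of data scanned.

    ```python
    # Illustrative sketch of data skipping with per-block min/max metadata
    # (zone maps); the block size and names are assumptions, not the thesis design.

    BLOCK_SIZE = 4

    def build_blocks(values):
        """Split a column into blocks and record min/max metadata per block."""
        blocks = []
        for i in range(0, len(values), BLOCK_SIZE):
            chunk = values[i:i + BLOCK_SIZE]
            blocks.append({"min": min(chunk), "max": max(chunk), "rows": chunk})
        return blocks

    def range_scan(blocks, lo, hi):
        """Scan only blocks whose [min, max] overlaps [lo, hi]; skip the rest."""
        hits, scanned = [], 0
        for b in blocks:
            if b["max"] < lo or b["min"] > hi:
                continue  # skipped: no row in this block can qualify
            scanned += len(b["rows"])
            hits.extend(v for v in b["rows"] if lo <= v <= hi)
        return hits, scanned

    blocks = build_blocks([3, 7, 1, 4, 90, 95, 91, 99, 15, 12, 11, 18])
    result, scanned = range_scan(blocks, 10, 20)
    print(result, f"scanned {scanned} of 12 rows")
    ```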
  • Item (Open Access)
    Exact algorithm for segment based sequence alignment
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2015) Tailor, Divyesh; Divakaran, Srikrishnan
    In bioinformatics, proteins, large biological molecules consisting of long chains of amino acids, are described as sequences over an alphabet of 20 amino acids. To analyze these protein sequences, pairwise alignment is used, which identifies regions of similarity that may result from structural, functional, and/or evolutionary relationships between them. Traditional pairwise alignment algorithms work at the residue level; they do not account for the structural or functional information that a protein carries. A new approach for protein sequence analysis is proposed here: pairwise alignment of two protein sequences based on segments. Segments of the sequences can be formed on the basis of protein features, i.e., functional sites or the secondary structure of the protein. Each segment carries a type and a weight for the alignment process. The algorithm should align the two sequences such that every segment with weight higher than a threshold value aligns with a segment of similar type, and the score of the alignment is maximal for the given scoring function. Here, we propose a generic framework to understand, explore, and experiment with proteins based on their features, i.e., structure, function, and evolution.
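    For background, the sketch below fills the standard dynamic-programming table for global residue-level alignment (the classical Needleman-Wunsch scheme that the segment-based approach generalizes from). The scoring constants are illustrative assumptions, and the segment types, weights, and threshold constraints described above are not modelled here.

    ```python
    # Background sketch: classic global (Needleman-Wunsch) alignment at the
    # residue level; the thesis' segment-level constraints are not shown.
    # The scoring values are illustrative assumptions.

    MATCH, MISMATCH, GAP = 2, -1, -2

    def global_align_score(a: str, b: str) -> int:
        """Fill the standard dynamic-programming table and return the score."""
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * GAP
        for j in range(1, m + 1):
            dp[0][j] = j * GAP
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = MATCH if a[i - 1] == b[j - 1] else MISMATCH
                dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitute
                               dp[i - 1][j] + GAP,      # gap in b
                               dp[i][j - 1] + GAP)      # gap in a
        return dp[n][m]

    print(global_align_score("HEAGAWGHEE", "PAWHEAE"))
    ```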
  • Item (Open Access)
    Precision agriculture using wireless sensor network
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2015) Joshi, Nikita Rajeshbhai; Shrivastava, Sanjay
    Farming practices should evolve with the rapid increase in population, and recent growth in wireless sensor networks (WSNs) has the capability to meet this objective. Better quality in crop production can be achieved using real-time data collected through a WSN. A greenhouse, moreover, allows farming in a controlled environment; hence, a combination of WSN and greenhouse gives better-quality crop yield. A greenhouse requires climate control and fertigation management, where fertigation is a combination of irrigation and fertilization. Existing architectures for greenhouse management collect data on various parameters using sensor nodes and control the values of those parameters using actuators. These architectures have very limited capability to handle faults in sensors and actuators. Deployment of sensor nodes in these architectures is crop dependent; therefore, when changing crops, the locations of sensor nodes must be modified and the details of these modifications entered manually in the database. Thus, they are not flexible architectures. In our work, a WSN-based architecture for a controlled environment such as a greenhouse is designed. This architecture provides actuator control using crop requirements stored in the database, and deployment strategies for sensor nodes and actuators using details such as bed size and crop requirements. A localization algorithm is used to find the exact location of each sensor node, making the architecture flexible: whenever the locations of sensor nodes need to be changed, they are detected automatically. We have designed an algorithm to detect faults in sensor nodes and actuators; these faults are isolated or reported to the user. The architecture also provides network management strategies to control the energy consumption of sensor nodes, which eventually helps increase network lifetime; WSN algorithms for sleep scheduling and localization are used to support these features. We designed a system for a specific group of crops, namely tomato, capsicum, and cucumber, using the architecture. This system is simulated in NS2, and it is verified that the system works as expected.
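    As an illustration of the kind of rule-based fault check such an architecture could run on incoming readings, the sketch below flags out-of-range and "stuck-at" sensor values. The thresholds, window length, and rules are assumptions for illustration, not the fault-detection algorithm designed in the thesis.

    ```python
    # Illustrative sketch of a simple sensor fault check; thresholds and
    # rules are assumptions, not the thesis' actual algorithm.

    def classify_reading(history, value, lo=0.0, hi=50.0, stuck_n=5):
        """Flag out-of-range or 'stuck-at' sensor readings as faulty."""
        if not (lo <= value <= hi):
            return "fault: out of physical range"
        history.append(value)
        if len(history) >= stuck_n and len(set(history[-stuck_n:])) == 1:
            return "fault: stuck-at value"   # sensor repeating one reading
        return "ok"

    readings = []
    for v in [21.5, 22.0, 22.0, 22.0, 22.0, 22.0]:
        print(v, classify_reading(readings, v))
    ```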
  • Item (Open Access)
    Analysis of various DFT techniques in the ASIC designs
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2013) Rathod, Gayatri Manohar; Bhatt, Amit
    With the increasing demands of the mobile communication industry and highly progressive VLSI technology, multi-million-gate silicon chips are in the market. To have fault-free, reliable chips, extended testability circuitry has to be added to the original design. The design technique that includes testability logic in the design at the logic synthesis level is known as Design for Test, abbreviated as DFT [1]. To achieve better fault coverage, I have chosen the full scan chain insertion technique for the OR1200 design. OR1200 is a 32-bit microprocessor with a 5-stage pipeline [12]. Its RTL code is taken from opencores.org, and Cadence RTL Compiler version 11.1 is used for logic synthesis. For testing and verification, Encounter Test version 11.1 and the NCVerilog simulator are used. To improve the testability of the design, deterministic fault analysis and random-resistant fault analysis techniques are also applied. The effects of all hardware DFT techniques are analysed in terms of area, dynamic power dissipation, and gate count. The main low-power technique, i.e., clock gating, is also inserted along with DFT to achieve better performance in terms of power dissipation. DFT causes a 25% increase in die area and a 12% increase in dynamic power. This is acceptable, as we get an OR1200 design with 99.67% fault coverage.
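    The headline numbers above are simple ratios; the sketch below only shows how such figures are computed, with made-up fault counts chosen to reproduce the quoted 99.67% value.

    ```python
    # Quick arithmetic sketch of the figures quoted above; the fault counts
    # are hypothetical placeholders used only to show how the ratios work.
    detected_faults, total_faults = 299_010, 300_000  # hypothetical counts
    fault_coverage = 100.0 * detected_faults / total_faults
    print(f"fault coverage: {fault_coverage:.2f}%")   # -> 99.67%

    area_before, area_after = 1.00, 1.25              # normalized die area
    print(f"area overhead: {100 * (area_after - area_before):.0f}%")  # 25%
    ```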
  • Item (Open Access)
    Exploring suitable classifier for robust face and facial expression recognition.
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2013) Jain, Deshna; Mitra, Suman K.
    Face recognition by machines has been studied for the last few decades, and the problem has been attempted in various ways. However, no robust solution has yet been achieved by researchers, due to the numerous challenges involved, such as illumination changes, pose variation, occlusion, cluttered backgrounds, non-cooperation of the subject, and the ageing effect on the human face. We have modelled the problem as a pattern recognition problem, whose solution involves mainly three steps: (a) face detection and segmentation, (b) feature extraction, and (c) classification or recognition. We have worked on finding a robust classifier for face and facial expression recognition. The Naive Bayes Classifier (NBC) is a statistical classifier that estimates the maximum probability of the possible classes to which the test data point may belong, assuming that the features are mutually independent; it makes use of Bayes' rule for likelihood computation. This approach works well if the distribution of the features is known accurately. Otherwise, the probability distribution of the features belonging to the corresponding classes has to be estimated with density estimation techniques. Here, features are assumed to follow a Gaussian distribution. Experiments are done for classifying faces from the YALE face database and the DAIICT database, taking ELPP coefficients as the features. Another classifier we used is the Support Vector Machine (SVM), which finds the decision plane between two classes with the help of support vectors having the maximum margin between them. Experiments performed with SVM give better results than NBC for both the DAIICT and YALE face databases. While using NBC, one of the estimation techniques used in this work is Kernel Density Estimation, also known as the Parzen window. This approach estimates the density at a point for a given dataset with a global bandwidth, and is used for face recognition on the YALE face database and the DAIICT database. For the DAIICT database, the estimation method shows different results for the same dataset with different parameters, whereas no significant results are obtained for the YALE face database. Moreover, the algorithm involves no measure of the best fit of the estimated curve. These issues are resolved by using Pearson's chi-squared test for the goodness of fit of the estimation while changing the parameters of the selected bandwidth. In addition, the bandwidth is made dynamic by computing it from neighboring data points instead of keeping it global. This approach performs better than the former for the YALE face database and equivalently for the DAIICT database. The experiments are extended to classifying facial expressions as well. A comparison of KNN, NBC, the proposed approach for NBC, and SVM is presented in this work. SVM outperformed all the other classifiers for both databases.
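    The Parzen-window estimate described above can be sketched directly: the density at a point is the average of Gaussian kernels centred on the training samples, and an NBC-style decision picks the class with the larger estimated likelihood. The bandwidth and the toy one-dimensional features below are illustrative assumptions; the thesis works on ELPP coefficients and also makes the bandwidth dynamic, which is not shown here.

    ```python
    # Minimal sketch of Parzen-window (kernel) density estimation with a
    # Gaussian kernel and a global bandwidth; values are illustrative.
    import math

    def parzen_density(x: float, samples, h: float = 0.5) -> float:
        """Estimate p(x) as the average of Gaussian kernels centred on samples."""
        norm = 1.0 / (math.sqrt(2 * math.pi) * h)
        return sum(norm * math.exp(-0.5 * ((x - s) / h) ** 2)
                   for s in samples) / len(samples)

    class_a = [1.0, 1.2, 0.9, 1.1]   # toy one-dimensional features
    class_b = [3.0, 3.3, 2.8, 3.1]

    x = 1.05
    # Naive-Bayes style decision with equal priors: pick the class whose
    # estimated likelihood at x is larger.
    print("class A" if parzen_density(x, class_a) > parzen_density(x, class_b)
          else "class B")
    ```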
  • Item (Open Access)
    Fingerprint image preprocessing for robust recognition
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2012) Munshi, Paridhi; Mitra, Suman K
    Fingerprints are the oldest and most widely used form of biometric identification. Since they are mainly used in forensic science, accuracy in fingerprint identification is highly important, and this accuracy depends on the quality of the image. Most fingerprint identification systems are based on minutiae matching, and a critical step in the correct matching of fingerprint minutiae is to reliably extract minutiae from the fingerprint images. However, fingerprint images may not be of good quality; they may be degraded and corrupted due to variations in skin, pressure, and impression conditions. Most feature extraction algorithms work on binary images instead of the gray-scale image, and the results of feature extraction depend on the quality of the binary image used. Keeping these points in mind, image preprocessing, including enhancement and binarization, is proposed in this work. This preprocessing is employed prior to minutiae extraction to obtain a more reliable estimate of minutiae locations and hence a robust matching performance. In this dissertation, we give an introduction to the fingerprint structure and the identification system. A discussion on the proposed methodology and the implementation of techniques for fingerprint image enhancement is given. Then a rough-set based method for binarization is proposed, followed by a discussion of methods for minutiae extraction. Experiments are conducted on real fingerprint images to evaluate the performance of the implemented techniques.
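    For reference, the sketch below shows a plain local-mean binarization baseline of the kind such preprocessing starts from; it does not implement the rough-set based method the dissertation proposes, and the block size is an assumption.

    ```python
    # Baseline sketch of binarization by local mean thresholding; this is
    # NOT the rough-set based method proposed in the dissertation.
    import numpy as np

    def binarize(img: np.ndarray, block: int = 16) -> np.ndarray:
        """Binarize a gray-scale image block by block using each block's mean."""
        out = np.zeros_like(img, dtype=np.uint8)
        for i in range(0, img.shape[0], block):
            for j in range(0, img.shape[1], block):
                patch = img[i:i + block, j:j + block]
                out[i:i + block, j:j + block] = (patch < patch.mean()) * 255
        return out  # ridge pixels (darker than the local mean) map to 255

    gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
    print(binarize(gray).shape)
    ```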
  • Item (Open Access)
    Study of the effectiveness of various low power techniques on sequential and combinational gate dominated designs
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2012) Rana, Kunj; Bhatt, Amit
    In the last decade, the semiconductor field has seen technological advancement like never before. The need for low power has caused a major paradigm shift in which power dissipation has become as important a consideration as performance and area. Electronic equipment is getting smaller and smaller, which requires smaller integrated circuits (ICs); because of this, power consumption has become a major concern in developing smaller ICs. The objective of the dissertation is to develop a low-power digital design flow using Cadence® tools. This report discusses various strategies and methods for designing low-power circuits and systems; it describes the many issues facing designers at various levels and presents some of the techniques that have been proposed to overcome these difficulties. To do this, a particular RTL design (Verilog code) is taken. First, various floorplans are tested on the design for better power numbers; then, using the same design, an analysis of two different interconnect estimation models is done. Finally, using the results of the floorplan and interconnect estimation model analyses, a low-power implementation of the same design is carried out, passing through the various steps of the digital design flow, such as synthesis, floorplanning, placement, and routing, and converted to the GDSII (Graphic Database System) file format, which can be sent directly to the foundry. In the low-power implementation, several techniques such as clock gating, operand isolation, and multi-Vt cells are used, along with some enhancement switches provided by the tool.
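    The saving from clock gating can be seen from the standard dynamic-power model P = αCV²f: gating idle cycles lowers the switching activity α while C, V, and f stay fixed. The numbers in the sketch below are illustrative assumptions, not measurements from the dissertation.

    ```python
    # Back-of-the-envelope sketch of why clock gating saves dynamic power,
    # using the standard P = alpha * C * V^2 * f model; all numbers are
    # illustrative assumptions.

    def dynamic_power(alpha, cap_farads, vdd, freq_hz):
        """Dynamic switching power of a CMOS node."""
        return alpha * cap_farads * vdd ** 2 * freq_hz

    C, VDD, F = 1e-12, 1.1, 500e6              # 1 pF, 1.1 V, 500 MHz (assumed)
    p_free  = dynamic_power(0.20, C, VDD, F)   # clock toggling every cycle
    p_gated = dynamic_power(0.05, C, VDD, F)   # gating idle cycles lowers alpha
    print(f"ungated: {p_free * 1e6:.1f} uW, gated: {p_gated * 1e6:.1f} uW")
    ```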
  • Item (Open Access)
    Image ranking based on clustering
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Sharma, Monika; Mitra, Suman K.
    In a typical content-based image retrieval (CBIR) system, query results are a set of images sorted by feature similarity with respect to the query. However, images with high feature similarity to the query may still be very different from it. We introduce a novel scheme to rank images, cluster-based image ranking, which tackles this difference between the query image and the retrieved images based on the hypothesis that semantically similar images tend to fall in the same cluster. The clustering approach attempts to capture the difference between the query and the retrieved images by learning the way similar images group together. For clustering, a color-moment based approach is used, where a moment is the weighted average intensity of pixels. The proposed method computes the color moments of the separated R, G, B components of an image as features capturing its information; this information can be used further in detailed analysis or in decision-making systems via classification techniques. The moments define a relationship of each pixel with its neighbors, and the set of computed moments forms the feature vector of the image. After obtaining the feature vectors of the images, the k-means technique is used to group these vectors into k clusters. The initial assignment of data to clusters is not random; it is based on the maximum connected components of the images. Two types of features are used to cluster the images, namely block-median based and color-moment based features, and experiments are performed with both to analyze their effect on the results. To demonstrate the effectiveness of the proposed method, a test database built from the retrieval results of the LIRE search engine is used, with the LIRE result as the baseline. The results indicate that the proposed methods tend to give better results than LIRE. All the experiments have been performed in MATLAB®. The Wang database of 10000 images is used for retrieval; it can be downloaded from http://wang.ist.psu.edu/iwang/test1.tar
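    A minimal sketch of the color-moment pipeline described above: nine features per image (mean, standard deviation, and skewness of each RGB channel) followed by k-means. The random stand-in images, the choice of k, and the seeding are assumptions; in particular, the thesis seeds clusters from maximum connected components rather than the default initialization used here.

    ```python
    # Sketch of color-moment features followed by k-means clustering; the
    # pipeline details are assumptions, not the thesis' exact setup.
    import numpy as np
    from sklearn.cluster import KMeans

    def color_moments(img: np.ndarray) -> np.ndarray:
        """9-dim feature: mean, std, and skewness for each R, G, B channel."""
        feats = []
        for c in range(3):
            ch = img[:, :, c].astype(float).ravel()
            mu = ch.mean()
            feats += [mu, ch.std(), np.cbrt(((ch - mu) ** 3).mean())]
        return np.array(feats)

    # Toy "images": random RGB arrays standing in for the database.
    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, (32, 32, 3)) for _ in range(20)]
    X = np.vstack([color_moments(im) for im in images])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # cluster id per image; ranking favours the query's cluster
    ```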
  • Item (Open Access)
    Shallow parsing of Gujarati text
    (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Dave, Vidhi; Pandya, Abhinay
    Shallow parsing is the process of assigning tags to the minimal, non-recursive phrases of a sentence. It is useful for many applications, such as question answering systems and information retrieval, where full parsing is not needed. Gujarati is one of the main languages of India and the 26th most spoken native language in the world, with more than 50 million speakers worldwide. Natural language processing of Gujarati is in its infancy. Nowadays, much data is available in Gujarati on websites, but due to the lack of resources it is hard for users to retrieve it efficiently. Shallow parsing of Gujarati can therefore ease downstream tasks such as machine translation, information extraction, and retrieval. In this thesis, we have worked on automatic shallow parsing of Gujarati. 400 sentences have been manually tagged, and different machine learning techniques, namely the Hidden Markov Model and Conditional Random Fields, have been used. We achieved good accuracy, similar to that of a Hindi chunker, even though the resources available for Gujarati are far more limited. The best performance is achieved using CRF with contextual information and part-of-speech tags.
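    To make "CRF with contextual information and part-of-speech tags" concrete, the sketch below builds the kind of per-token feature dictionaries a CRF chunker consumes. The feature names and the toy English sentence are illustrative assumptions standing in for the Gujarati data.

    ```python
    # Sketch of the token features a CRF chunker consumes: the current word,
    # its POS tag, and the neighbouring POS tags (contextual information).
    # Feature names and the toy sentence are illustrative assumptions.

    def token_features(sent, i):
        """Feature dict for token i of sent, where sent = [(word, pos), ...]."""
        word, pos = sent[i]
        return {"word": word, "pos": pos,
                "prev_pos": sent[i - 1][1] if i > 0 else "BOS",
                "next_pos": sent[i + 1][1] if i < len(sent) - 1 else "EOS"}

    # Toy POS-tagged sentence (English stand-in for the Gujarati data).
    sent = [("the", "DET"), ("boy", "NN"), ("runs", "VB")]
    for i in range(len(sent)):
        print(token_features(sent, i))
    ```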