M Tech Dissertations
Permanent URI for this collection: http://drsr.daiict.ac.in/handle/123456789/3
12 results
Search Results
Item Open Access Modelling and short term forecasting of flash floods in an urban environment (Dhirubhai Ambani Institute of Information and Communication Technology, 2018) Ogale, Suraj; Srivastava, Sanjay
Rapid urbanization, climate change, and extreme rainfall have resulted in a growing number of urban flash floods. It is important to predict the occurrence of a flood so that its aftermath can be minimized. Flood forecasting is a major exercise performed to determine the chances of a flood when suitable conditions are present. Short term forecasting, or nowcasting, is a dominant technique used in urban areas to predict events in the very near future, up to six hours ahead. In orthodox methods of flood forecasting, current weather conditions are examined using conventional means such as radar, satellite imaging, and complex calculations involving complicated mathematical equations. Recent developments in Information and Communication Technology (ICT) and Machine Learning (ML) allow this hydrological problem, along with many other real world situations, to be studied from a different perspective. The main aim of this thesis is to design a theoretical model that accounts for the parameters causing an urban flash flood and to develop a prediction tool for forecasting near future events. To test the soundness of the model, data synthesis is performed and the results are evaluated using an artificial neural network.

Item Open Access Secure SQL with access control for database as a service model (Dhirubhai Ambani Institute of Information and Communication Technology, 2014) Dave, Jay; Das, Manik Lal
The rapid growth of internet and networking technology has given rise to the "Software as a Service" model. In this model, application service providers (ASPs) provide the functionality of software over the internet, giving users access to software online. However, the large data volumes of a great number of users may raise a storage problem at the ASP site.
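The flash-flood item above evaluates its model with an artificial neural network on synthesized data. As a rough illustration of that kind of learned flood/no-flood predictor (not the thesis model; the features, threshold, and data below are entirely made up), a single logistic neuron trained on synthetic rainfall features:

```python
import math, random

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=500, lr=0.5):
    """Fit a single logistic neuron by stochastic gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i, xi in enumerate(x):      # log-loss gradient step
                w[i] -= lr * (p - y) * xi
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    """Estimated probability of a flood for one feature vector."""
    return _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Synthetic features: (rainfall intensity, drainage saturation), both in [0, 1];
# label 1 (flood) when their sum is high. Purely a stand-in for real data.
random.seed(7)
samples = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x0 + x1 > 1.2 else 0 for x0, x1 in samples]
w, b = train(samples, labels)
```

A real nowcasting model would of course use measured hydrological inputs and a deeper network; this only shows the train-then-predict shape of the approach.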
The Database as a Service model is a more appropriate model for ASPs. It offers all privileges of a database to its users over the internet: ASPs can store their large data with a database provider, which serves the full functionality of a database over the network. However, this model raises problems of data confidentiality. Confidential user data is stored at an untrusted database provider, where theft of sensitive data is possible: an outside attacker can attack the database provider and snoop on confidential data, and a curious or malicious database administrator can also steal sensitive data. We studied existing encryption schemes that provide confidentiality in the database as a service model. First we studied the scheme of Hakan, et al. [6], which provides security by storing the whole tuple in encrypted form in the database; however, this scheme results in more computation at the ASP site. The second scheme, CryptDB [7], does not have this problem. CryptDB provides security by encrypting data with different encryption methods. However, it removes randomness even from ciphertexts that do not need randomness removal, which leaks the equality relation and order relation of those ciphertexts. We focused on solving these limitations and providing a more secure scheme. In our proposed solution, the ASP partitions attributes and encrypts each partition with a different key; randomness is removed only from the partitions containing ciphertexts that actually need it, while ciphertexts in the other partitions remain secured with randomness. We elaborated all schemes with examples, analyzed how the proposed solution addresses the issues of CryptDB, and gave security proofs for our proposed solution.
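The attribute-partitioning idea above can be illustrated with a toy sketch (this is not CryptDB's actual construction; the keys, names, and hash-based cipher are illustrative only): each partition gets its own key, and only partitions whose values must support equality queries use deterministic encryption, so randomness is preserved everywhere else:

```python
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy hash-counter keystream, for illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, pt: bytes, deterministic: bool) -> bytes:
    # Deterministic mode derives the nonce from the plaintext (SIV-style),
    # so equal plaintexts give equal ciphertexts: equality queries work,
    # but the equality relation leaks. Randomized mode uses a fresh nonce.
    if deterministic:
        nonce = hmac.new(key, pt, hashlib.sha256).digest()[:16]
    else:
        nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(pt))
    return nonce + bytes(a ^ b for a, b in zip(pt, ks))

def decrypt(key: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    ks = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

# Two attribute partitions, each under its own key (names are illustrative):
k_eq, k_rand = b"key-for-eq-partition", b"key-for-private-partition"
c1 = encrypt(k_eq, b"alice", deterministic=True)    # equality-searchable
c2 = encrypt(k_eq, b"alice", deterministic=True)
c3 = encrypt(k_rand, b"alice", deterministic=False)  # stays randomized
c4 = encrypt(k_rand, b"alice", deterministic=False)
```

Here `c1 == c2`, so the server can evaluate equality predicates on that partition, while `c3` and `c4` differ and reveal nothing about whether their plaintexts match.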
We also implemented a module of this scheme.

Item Open Access Semantic web data management: data partitioning and query execution (Dhirubhai Ambani Institute of Information and Communication Technology, 2012) Padiya, Trupti; Bhise, Minal
A Semantic Web database is an RDF database. Due to the increased use of the Semantic Web in real life applications, the use of RDF databases has grown immensely. With this tremendous increase in RDF data, efficient management of the data at larger scales and query performance are two major concerns. RDF data can be stored using various storage techniques; the data used for this experiment is the FOAF dataset, a social network dataset. We study and evaluate query performance for various storage techniques in terms of query execution time and scalability using the FOAF dataset. The thesis demonstrates the effect of data partitioning techniques on query performance. For our experiments, we stored FOAF data in a triple store, property tables, and vertically and horizontally partitioned data stores, and analyzed query execution time for each. Partitioning techniques were observed to make queries up to 168 times faster compared to the triple store. Materialized views are used to further improve performance for frequently seen social web queries; they showed query execution times 8 times faster than the partitioned data.

Item Open Access SQL-GQL inter-query translation for Google App engine datastore (Dhirubhai Ambani Institute of Information and Communication Technology, 2012) Kotecha, Shyam; Bhise, Minal
The on demand services, usage based pricing, and scalability features of cloud computing have attracted many customers to move their applications into the cloud. But different cloud service providers use different standards and frameworks to host applications and data.
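The storage layouts compared in the semantic web item above, a single triple table versus one two-column table per predicate, can be sketched with SQLite (the schema and FOAF-style data here are illustrative, not the thesis setup):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Triple store: one wide table of (subject, predicate, object).
cur.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
data = [("alice", "foaf:knows", "bob"),
        ("alice", "foaf:name", "Alice"),
        ("bob", "foaf:name", "Bob")]
cur.executemany("INSERT INTO triples VALUES (?, ?, ?)", data)

# Vertical partitioning: one two-column table per predicate, so a query
# touching one predicate scans only that predicate's rows.
for s, p, o in data:
    table = p.replace(":", "_")
    cur.execute(f"CREATE TABLE IF NOT EXISTS {table} (subject TEXT, object TEXT)")
    cur.execute(f"INSERT INTO {table} VALUES (?, ?)", (s, o))

# Same query on both layouts: names of the people alice knows.
q_triple = cur.execute(
    "SELECT n.object FROM triples k JOIN triples n "
    "ON k.object = n.subject "
    "WHERE k.predicate = 'foaf:knows' AND n.predicate = 'foaf:name' "
    "AND k.subject = 'alice'").fetchall()
q_vertical = cur.execute(
    "SELECT n.object FROM foaf_knows k JOIN foaf_name n "
    "ON k.object = n.subject WHERE k.subject = 'alice'").fetchall()
```

Both queries return the same answer, but the triple-store version self-joins one large table with predicate filters, while the partitioned version joins two small per-predicate tables, which is the source of the speedups the thesis measures.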
Customers have to follow these standards and frameworks. When a customer wants to migrate an application and/or data to another cloud service provider, the application code and database structure must be modified according to the standards of the new provider. This modification is very costly, and as a consequence changing cloud service provider becomes difficult; this situation is called vendor lock-in in the cloud. Focusing on the database, complete database migration requires migration of data, database schema, and queries. This thesis work concentrates on migration of queries. Automation in the migration process is achieved by translation algorithms: this thesis introduces inter-query translation algorithms that translate SQL (Structured Query Language) queries and GQL (Google Query Language) queries into each other. The implementation of these algorithms is demonstrated for the MySQL Sakila database.

Item Metadata only Service level agreement parameter matching in cloud computing (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Chauhan, Tejas; Chaudhary, Sanjay; Bhise, Minal
A cloud is a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms, and/or software services). It provides on-demand, pay-as-you-go computing resources and has become an alternative to traditional IT infrastructure. As more and more consumers delegate their tasks to cloud providers, the Service Level Agreement (SLA) between consumer and provider becomes an important aspect. Due to the dynamic nature of the cloud, the matching of service level agreements needs to be dynamic, and continuous monitoring of Quality of Service (QoS) is necessary to enforce SLAs. This complex nature of the cloud warrants a sophisticated means of managing SLAs. An SLA contains many parameters, such as the cloud's types of services, resources (physical memory, main memory, processor speed, ethernet speed, etc.)
and properties (availability, response time, server reboot time, etc.). At present, actual cloud SLAs are typically plain-text documents, sometimes published online as an informative document, so a consumer needs to manually match application requirements against each and every cloud provider to identify a compatible one. This work addresses the issue of matching SLA parameters to find the best suitable cloud provider. The proposed algorithm identifies compatible cloud providers by matching the parameters of the application requirements against the cloud SLAs, and gives the consumer a suggestion in terms of the number of matched parameters.

Item Open Access Image ranking based on clustering (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Sharma, Monika; Mitra, Suman K.
In a typical content-based image retrieval (CBIR) system, query results are a set of images sorted by feature similarity with respect to the query. However, images with high feature similarity to the query may still be very different from it. We introduce a novel scheme to rank images, cluster based image ranking, which tackles this difference between the query image and the retrieved images based on the hypothesis that semantically similar images tend to be clustered in the same cluster. The clustering approach attempts to capture the difference between query and retrieved images by learning how similar images fall into the same cluster. A color moments based approach is used for clustering. A moment is the weighted average intensity of pixels; the proposed method computes color moments of the separated R, G, B components of an image as features capturing information about the image, which can be used further in detailed analysis or in decision making systems via classification techniques. The moments define a relationship between a pixel and its neighbors, and the set of computed moments forms the feature vector of the image.
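The per-channel color moment feature just described can be sketched as follows; the thesis works in MATLAB, so this Python version, with a made-up 2x2 image and the common mean/standard-deviation/skewness choice of first three moments, is purely illustrative:

```python
import math

def channel_moments(values):
    """First three color moments of one channel: mean, std deviation, skewness."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    # signed cube root of the third central moment, a common skewness form
    third = sum((v - mean) ** 3 for v in values) / n
    skew = math.copysign(abs(third) ** (1.0 / 3.0), third)
    return [mean, std, skew]

def color_moment_feature(pixels):
    """9-dimensional feature vector: 3 moments per R, G, B channel."""
    feats = []
    for ch in range(3):
        feats.extend(channel_moments([p[ch] for p in pixels]))
    return feats

# A tiny 2x2 "image" as a flat list of (R, G, B) pixels:
img = [(10, 20, 30), (10, 20, 30), (50, 60, 70), (50, 60, 70)]
fv = color_moment_feature(img)
```

Each image thus maps to a short fixed-length vector, which is what the clustering step then operates on.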
After obtaining the feature vectors of the images, the k-means technique is used to cluster these vectors into k classes. The initial assignment of data to clusters is not random; it is based on the maximum connected components of the images. Two types of features are used to cluster the images, block median based and color moment based, and experiments are performed with both to analyze their effect on the results. To demonstrate the effectiveness of the proposed method, a test database built from the retrieval results of the LIRE search engine is used, with the LIRE results as the baseline. The results suggest that the proposed methods give better results than LIRE. All experiments were performed in MATLAB(R). The Wang database of 10000 images is used for retrieval; it can be downloaded from http://wang.ist.psu.edu/iwang/test1.tar

Item Open Access Policy based resource allocation on infrastructure as a service cloud (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Vora, Dhairya; Chaudhary, Sanjay; Bhise, Minal
Cloud computing refers to the provision of computational resources on demand, and resource allocation is an important aspect of it. A cloud user asks for resources in terms of a lease, which stores information about the required resources and the time at which they are required. The cloud provider accepts the lease if it can guarantee the user the asked-for resources at the asked time; a better scheduling algorithm can accept a greater number of leases and hence give better resource utilization. Clouds provide four types of leases: immediate, advance reservation, best effort, and deadline sensitive. The immediate allocation policy accepts a lease if resources are available now, else it rejects the lease. The advance reservation policy accepts a lease if resources are available at the asked time, else it rejects the lease.
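The immediate and advance reservation policies just described both reduce to an availability check; a minimal sketch of the advance reservation variant over a single pooled resource (the capacity and lease tuples are made-up, and real schedulers track per-node state):

```python
def can_reserve(existing, capacity, start, end, amount):
    """Advance-reservation check: does `amount` fit alongside already
    accepted leases at every instant of [start, end)?

    `existing` is a list of accepted (start, end, amount) leases.
    Usage is piecewise constant, so checking the window start and every
    lease boundary inside the window is sufficient."""
    points = {start} | {s for s, e, a in existing} | {e for s, e, a in existing}
    for t in points:
        if start <= t < end:
            used = sum(a for s, e, a in existing if s <= t < e)
            if used + amount > capacity:
                return False
    return True

# 10 units of capacity; one accepted lease holds 6 units from t=2 to t=6.
leases = [(2, 6, 6)]
ok = can_reserve(leases, 10, 0, 4, 4)      # 6 + 4 = 10, fits exactly
too_big = can_reserve(leases, 10, 3, 5, 5)  # 6 + 5 = 11, rejected
```

The immediate policy is the same check with `start` equal to the current time.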
The best effort allocation policy accepts a lease as soon as the resources become available. Deadline sensitive leases have parameters such as required resources, startTime, endTime, and duration; the scheduler can accept such a lease by providing the required resources for the asked duration between the given startTime and endTime. Haizea is a resource lease manager which handles the scheduling of leases. The proposed algorithm extends Haizea's current scheduling algorithm for deadline sensitive leases; the aim of the thesis is to improve resource utilization by extending Haizea's current scheduling algorithms. The proposed scheduling algorithm accepts a greater number of leases by dividing a deadline sensitive lease into multiple slots and by backfilling already accepted leases.

Item Open Access Service integration on social network (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Patel, Mehul; Chaudhary, Sanjay; Bhise, Minal
Microblogging services are part of social network platforms that allow people to exchange short messages. Social networks enable people to play an active role in collecting, analyzing, and reporting news and information, and people can use them for marketing, buying, and selling their products. A seller can tweet product information, including links to related photos, videos, etc., and a buyer can show interest in the product by means of tweets. A social network can thus be used as a mechanism to bring sellers and buyers closer, providing a common platform for them to sell and buy products. Microblogs can be parsed and analyzed to generate useful suggestions, e.g. sellers can be informed about potential buyers to get a higher profit. Such information can be used to generate classified information that helps users take decisions, e.g. the minimum expected price of a crop that sellers expect in a given region. Microblogs can be written in different regional languages.
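The slot-division idea from the Haizea item above can be sketched greedily: split a deadline sensitive lease across the free gaps left by already scheduled leases, and accept it only if the pieces fit before the deadline. This is an illustrative single-resource simplification, not the thesis's actual algorithm:

```python
def schedule_deadline_lease(busy, start, deadline, duration):
    """Greedily split a deadline-sensitive lease into slots that fit the
    free gaps between already scheduled (busy) intervals. Returns a list
    of (slot_start, slot_end) pieces, or None if the deadline can't be met."""
    busy = sorted(busy)
    slots, t, need = [], start, duration
    # A zero-width sentinel at the deadline exposes the final free gap.
    for b_start, b_end in busy + [(deadline, deadline)]:
        gap_start, gap_end = t, min(b_start, deadline)
        if gap_end > gap_start:
            take = min(need, gap_end - gap_start)
            slots.append((gap_start, gap_start + take))
            need -= take
            if need == 0:
                return slots
        t = max(t, b_end)
    return None

busy = [(2, 4), (6, 7)]   # already accepted leases on this resource
plan = schedule_deadline_lease(busy, start=0, deadline=10, duration=5)
```

Here 5 units of work are packed into the gaps `(0, 2)`, `(4, 6)`, and part of `(7, 10)`, whereas a non-splitting scheduler would reject the lease for lack of a single contiguous 5-unit gap.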
Agro-produce marketing information can be processed and then stored in an RDF/RDF(S) and OWL data store. SPARQL, and conjunctive queries with a Pellet-like reasoner or SPARQL-DL, can be used to generate classified, summarized information from the RDF/RDF(S) and OWL data store.

Item Open Access Migration of database from one cloud to other clouds (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Bhatt, Shreyansh; Chaudhary, Sanjay
The on demand services and scalability features of cloud computing have attracted many customers to move their applications into the cloud. Cloud service providers follow different standards to host applications and data, and data must be stored according to the schema of the particular cloud service provider. A need can arise to migrate a cloud application and/or data to another cloud service provider. In that case, the relevant code and the structure of the database must be modified for the newly identified cloud service provider, which is costly; as a consequence, changing cloud service provider becomes difficult. This issue is regarded as vendor lock-in in terms of cloud computing. The current study helps to identify the issues in migrating a database between two clouds and develops novel techniques to facilitate this migration. For this, RDF/RDFS (Resource Description Framework / Resource Description Framework Schema) is used as an intermediate model, and automation in the migration process is achieved by transformation algorithms. Bigtable, the Google App Engine datastore, is taken as the cloud datastore, and algorithms are developed and implemented to convert RDF/RDFS data into data that can be stored in Bigtable and vice versa. Results are shown for the same.
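At its core, the RDF-to-datastore conversion in the migration item above regroups triples by subject into entity rows; a minimal, lossless sketch of that mapping (the grouping scheme is illustrative, not the thesis algorithm):

```python
def triples_to_entities(triples):
    """Group RDF (subject, predicate, object) triples into per-subject
    entities -- roughly how a row-oriented datastore like Bigtable keys
    an entity by subject, with one column (property) per predicate."""
    entities = {}
    for s, p, o in triples:
        entities.setdefault(s, {}).setdefault(p, []).append(o)
    return entities

def entities_to_triples(entities):
    """Inverse mapping, so a round trip loses no information."""
    return sorted((s, p, o)
                  for s, props in entities.items()
                  for p, objs in props.items()
                  for o in objs)

triples = [("alice", "foaf:knows", "bob"),
           ("alice", "foaf:knows", "carol"),
           ("bob", "foaf:name", "Bob")]
rows = triples_to_entities(triples)
```

Because the mapping is invertible, the same intermediate form can be regenerated from the datastore and re-targeted at a different provider, which is the point of using RDF/RDFS as the pivot model.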
Subsequently, the same algorithms are generalized to store RDF/RDFS data in any cloud datastore.

Item Open Access Prolog based approach to reasoning about dynamic hierarchical key assignment schemes (Dhirubhai Ambani Institute of Information and Communication Technology, 2011) Mundra, Anil Kumar; Mathuria, Anish M.
The problem of allowing higher level users to access information belonging to lower levels is called the hierarchical access control problem. In a hierarchical access control system, users are partitioned into a number of classes, called security classes, which are organized in a hierarchy. Hierarchies arise in systems where some users have higher privileges than others, and a security class inherits the privileges of its descendant classes. A basic hierarchical key assignment scheme is a method of assigning an encryption key to each class in the hierarchy. In the literature, a number of such schemes are available, and many of them have formal proof models for their security properties. Nowadays almost all schemes offer a solution to the dynamic access control problem, yet we found that no formal proof model is available for dynamic schemes, so no arguments can be made about their security properties. We present a new approach to automatic verification using Prolog for analyzing existing dynamic and static hierarchical key assignment schemes and verifying their security properties. We discovered some new attacks on existing schemes and propose a new scheme to overcome those attacks.
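The thesis above does its reasoning in Prolog; the core property such an analysis checks, that keys are derivable only downward in the hierarchy, can be sketched in Python with a toy hash-based key assignment (the scheme, hierarchy, and keys are illustrative, not any scheme from the thesis):

```python
import hashlib

def derive_key(parent_key: bytes, child: str) -> bytes:
    """Toy key assignment: a child's key is a one-way hash of its parent's
    key and its name, so derivation only flows downward in the hierarchy."""
    return hashlib.sha256(parent_key + child.encode()).digest()

def reachable_keys(cls, hierarchy, key, acc=None):
    """All keys a class can compute: its own plus, recursively, those of
    its descendants -- the set a verifier would check for each class."""
    acc = {} if acc is None else acc
    acc[cls] = key
    for child in hierarchy.get(cls, []):
        reachable_keys(child, hierarchy, derive_key(key, child), acc)
    return acc

# A small hierarchy: director > {hr, engineering}, engineering > {dev}.
hierarchy = {"director": ["hr", "engineering"], "engineering": ["dev"]}
root_key = b"director-master-key"
from_director = reachable_keys("director", hierarchy, root_key)
from_engineering = reachable_keys("engineering", hierarchy,
                                  from_director["engineering"])
```

A Prolog model encodes the same derivation rules as clauses and asks whether any class can reach a key outside its descendant set, before and after dynamic operations such as adding or deleting a class; an unexpected solution to that query is exactly the kind of attack the thesis reports.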