
#56, II Floor, Pushpagiri Complex, 17th Cross, 8th Main, Opp. Water Tank, Vijaynagar, Bangalore - 560040.

Website: www.citlprojects.com, Email ID: projects@citlindia.com, hr@citlindia.com

MOB: 9886173099 / 9986709224, PH: 080-23208045 / 23207367

DOTNET 2013


(Networking, Network Security, Mobile Computing, Cloud Computing, Wireless Sensor Network, Data Mining, Web Mining, Artificial Intelligence, VANET, Ad-Hoc Network)

CLOUD COMPUTING

1. Rethinking Vehicular Communications: Merging VANET with Cloud Computing
Domain: Cloud Computing; YOP: 2013
Abstract: Despite the surge in Vehicular Ad Hoc NETwork (VANET) research, future high-end vehicles are expected to under-utilize their on-board computation, communication, and storage resources. Olariu et al. envisioned the next paradigm shift from conventional VANET to Vehicular Cloud Computing (VCC) by merging VANET with cloud computing. But to date there is no solid architecture in the literature for cloud computing from the VANET standpoint. In this paper, we put forth a taxonomy of VANET-based cloud computing. It is, to the best of our knowledge, the first effort to define a VANET cloud architecture. Additionally, we divide VANET clouds into three architectural frameworks named Vehicular Clouds (VC), Vehicles using Clouds (VuC), and Hybrid Vehicular Clouds (HVC). We also outline the unique security and privacy issues and research challenges in VANET clouds.

2. A Self-tuning Failure Detection Scheme for Cloud Computing Service
Domain: Cloud Computing; YOP: 2013
Abstract: Cloud computing is an increasingly important solution for providing services deployed in dynamically scalable cloud networks. Services in cloud computing networks may be virtualized with specific servers which host abstracted details. Some of the servers are active and available, while others are busy or heavily loaded, and the remaining are offline for various reasons. Users expect the right and available servers to complete their application requirements. Therefore, in order to provide an effective control scheme with parameter guidance for cloud resource services, failure detection is essential to meet users' service expectations. It can resolve possible performance bottlenecks in providing the virtual service for cloud computing networks. Most existing Failure Detector (FD) schemes do not automatically adjust their detection service parameters for dynamic network conditions, and thus cannot be used in real applications. This paper explores FD properties in relation to actual, automatic fault-tolerant cloud computing networks, and finds a general non-manual analysis method to self-tune the corresponding parameters to satisfy user requirements. Based on this general automatic method, we propose a specific and dynamic self-tuning failure detector, called SFD, as a major breakthrough over the existing schemes. We carry out actual and extensive experiments to compare the quality of service between SFD and several other existing FDs. Our experimental results demonstrate that our scheme can automatically adjust SFD control parameters to obtain corresponding services and satisfy user requirements, while maintaining good performance. Such an SFD can be extensively applied to industrial and commercial usage, and it can also significantly benefit cloud computing networks.
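The following is a minimal, illustrative sketch of the general idea behind a self-tuning failure detector: heartbeat inter-arrival times are tracked and the suspicion timeout is adjusted from their statistics, with a tuning hook that reacts to reported mistakes. It is not the SFD algorithm from the paper; the class name, window size, and adjustment rule are assumptions made for the example.

```python
import time
from collections import deque

class AdaptiveFailureDetector:
    """Illustrative self-tuning failure detector (not the paper's SFD).

    Keeps a sliding window of heartbeat inter-arrival times and sets the
    suspicion timeout to mean + k * stddev, where k is adjusted at runtime.
    """

    def __init__(self, window=100, safety_factor=3.0):
        self.arrivals = deque(maxlen=window)   # recent inter-arrival times
        self.last_heartbeat = None
        self.k = safety_factor                 # tuned up/down at runtime

    def heartbeat(self, now=None):
        now = now if now is not None else time.time()
        if self.last_heartbeat is not None:
            self.arrivals.append(now - self.last_heartbeat)
        self.last_heartbeat = now

    def _timeout(self):
        if not self.arrivals:
            return float("inf")                # no data yet: never suspect
        n = len(self.arrivals)
        mean = sum(self.arrivals) / n
        var = sum((x - mean) ** 2 for x in self.arrivals) / n
        return mean + self.k * var ** 0.5

    def suspect(self, now=None):
        """True if the monitored server is currently suspected to have failed."""
        now = now if now is not None else time.time()
        return (self.last_heartbeat is not None and
                now - self.last_heartbeat > self._timeout())

    def report_mistake(self, false_suspicion):
        """Self-tuning hook: widen the timeout after a false suspicion,
        narrow it slowly otherwise (illustrative adjustment rule)."""
        self.k = self.k * 1.25 if false_suspicion else max(1.0, self.k * 0.99)
```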
3. Collaboration-Based Cloud Computing Security Management Framework
Domain: Cloud Computing; YOP: 2013
Abstract: Although the cloud computing model is considered to be a very promising internet-based computing platform, it results in a loss of security control over the cloud-hosted assets. This is due to the outsourcing of enterprise IT assets hosted on third-party cloud computing platforms. Moreover, the lack of security constraints in the Service Level Agreements between cloud providers and consumers results in a loss of trust as well. Obtaining a security certification such as ISO 27000 or NIST-FISMA would help cloud providers improve consumers' trust in their cloud platforms' security. However, such standards are still far from covering the full complexity of the cloud computing model. We introduce a new cloud security management framework based on aligning the FISMA standard to fit the cloud computing model, enabling cloud providers and consumers to be security certified. Our framework is based on improving collaboration between cloud providers, service providers, and service consumers in managing the security of the cloud platform and the hosted services. It is built on top of a number of security standards that assist in automating the security management process. We have developed a proof of concept of our framework using .NET and deployed it on a test-bed cloud platform. We evaluated the framework by managing the security of a multi-tenant SaaS application exemplar.

4. Framework of a National Level Electronic Health Record System
Domain: Cloud Computing; YOP: 2013
Abstract: Electronic health is vital for enabling improved access to health records and boosting the quality of the health services provided. In this paper, a framework for an electronic health record system is developed for connecting a nation's health care facilities together in a network using cloud computing technology. Cloud computing ensures easy access to health records from anywhere and at any time, with easy scalability and prompt on-demand availability of resources. A hybrid cloud is adopted in modeling the system, and solutions are proposed for the main challenges faced in any typical electronic health record system.

5. STAR: A Proposed Architecture for Cloud Computing Applications
Domain: Cloud Computing; YOP: 2013
Abstract: With the rapid development of cloud computing, an architecture to follow in developing cloud computing applications is necessary. Existing architectures do not address the way cloud applications are developed; they focus on the structure of clouds and on how to use clouds as a tool in developing cloud computing applications, rather than on how the applications themselves are developed using clouds. This paper presents a survey of key cloud computing concepts, definitions, characteristics, development phases, and architectures. It also proposes and describes a novel architecture which aids developers in building cloud computing applications in a systematic way. It discusses how cloud computing transforms the way applications are developed and delivered, and describes the architectural considerations that developers must take when adopting and using cloud computing technology.

6. CDA: A Cloud Dependability Analysis Framework for Characterizing System Dependability in Cloud Computing Infrastructures
Domain: Cloud Computing; YOP: 2013
Abstract: Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructure. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software failures. Dependability assurance is crucial for building sustainable cloud computing services. Although many techniques have been proposed to analyze and enhance the reliability of distributed systems, there is little work on understanding the dependability of cloud computing environments. As virtualization has been an enabling technology for the cloud, it is imperative to investigate the impact of virtualization on cloud dependability, which is the focus of this work. In this paper, we present a cloud dependability analysis (CDA) framework with mechanisms to characterize failure behavior in cloud computing infrastructures. We design failure-metric DAGs (directed acyclic graphs) to analyze the correlation of various performance metrics with failure events in virtualized and non-virtualized systems. We study multiple types of failures. By comparing the generated DAGs in the two environments, we gain insight into the impact of virtualization on cloud dependability. This paper is the first attempt to study this crucial issue. In addition, we exploit the identified metrics for failure detection. Experimental results from an on-campus cloud computing test bed show that our approach can achieve high detection accuracy while using a small number of performance metrics.

7. Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption
Domain: Cloud Computing; YOP: 2012
Abstract: Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third-party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, encrypting the PHRs before outsourcing is a promising method. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semi-trusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.

8. Data Integrity Proofs in Cloud Storage
Domain: Cloud Computing; YOP: 2011
Abstract: Cloud computing has been envisioned as the de-facto solution to the rising storage costs of IT enterprises. With the high costs of data storage devices as well as the rapid rate at which data is being generated, it proves costly for enterprises or individual users to frequently update their hardware. Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance. Cloud storage moves the user's data to large, remotely located data centers over which the user does not have any control. However, this unique feature of the cloud poses many new security challenges which need to be clearly understood and resolved. We provide a scheme which gives a proof of data integrity in the cloud, which the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA).

9. Privacy-Preserving Multi-keyword Ranked Search over Encrypted Cloud Data
Domain: Cloud Computing; YOP: 2011
Abstract: With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data has to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is crucial for the search service to allow multi-keyword queries and provide result similarity ranking to meet the effective data retrieval need. Related works on searchable encryption focus on single-keyword or Boolean keyword search, and rarely differentiate the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-keyword semantics, we choose the efficient principle of coordinate matching, i.e., as many matches as possible, to capture the similarity between the search query and data documents, and further use inner product similarity to quantitatively formalize such a principle for similarity measurement. We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. Thorough analysis investigating the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a real-world dataset further show that the proposed schemes indeed introduce low overhead on computation and communication.
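As a small illustration of the ranking principle named above (coordinate matching formalized as inner product similarity), the sketch below ranks documents by the inner product of binary keyword vectors. The secure inner-product encryption that MRSE actually applies is omitted, and the dictionary and documents are made-up examples.

```python
# Illustrative ranking principle behind MRSE: represent each document and the
# query as binary keyword vectors over a fixed dictionary and rank documents by
# the inner product ("coordinate matching": number of matched keywords).
# The secure inner-product encryption used in the paper is omitted here.

DICTIONARY = ["cloud", "storage", "audit", "privacy", "keyword", "search"]

def to_vector(keywords):
    return [1 if term in keywords else 0 for term in DICTIONARY]

def inner_product(u, v):
    return sum(a * b for a, b in zip(u, v))

def ranked_search(query_keywords, documents, top_k=3):
    """documents: dict mapping doc_id -> set of keywords (hypothetical data)."""
    q = to_vector(set(query_keywords))
    scores = {doc_id: inner_product(q, to_vector(kw))
              for doc_id, kw in documents.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_k]

docs = {
    "d1": {"cloud", "storage", "audit"},
    "d2": {"privacy", "keyword", "search", "cloud"},
    "d3": {"storage"},
}
print(ranked_search(["cloud", "privacy", "search"], docs))
# d2 matches 3 query keywords, d1 matches 1, d3 matches 0
```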

10. Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing
Domain: Cloud Computing; YOP: 2011
Abstract: Cloud computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third-party auditor (TPA), the following two fundamental requirements have to be met: 1) the TPA should be able to efficiently audit the cloud data storage without demanding a local copy of the data, and introduce no additional on-line burden to the cloud user; 2) the third-party auditing process should bring in no new vulnerabilities towards user data privacy. Specifically, our contribution in this work can be summarized as the following three aspects: 1) We motivate the public auditing system of data storage security in cloud computing and provide a privacy-preserving auditing protocol, i.e., our scheme supports an external auditor to audit users' outsourced data in the cloud without learning the data content. 2) To the best of our knowledge, our scheme is the first to support scalable and efficient public auditing in cloud computing. In particular, our scheme achieves batch auditing, where multiple delegated auditing tasks from different users can be performed simultaneously by the TPA. 3) We prove the security and justify the performance of our proposed schemes through concrete experiments and comparisons with the state of the art.

11. Next Generation Cloud Computing Architecture
Domain: Cloud Computing; YOP: 2009
Abstract: Cloud computing is fundamentally altering expectations for how and when computing, storage, and networking resources should be allocated, managed, and consumed. End-users are increasingly sensitive to the latency of the services they consume. Service developers want service providers to ensure or provide the capability to dynamically allocate and manage resources in response to changing demand patterns in real time. Ultimately, service providers are under pressure to architect their infrastructure to enable real-time, end-to-end visibility and dynamic resource management with fine-grained control, to reduce total cost of ownership while also improving agility. The current approaches to enabling real-time, dynamic infrastructure are inadequate, expensive, and not scalable to support consumer mass-market requirements. Over time, server-centric infrastructure management systems have evolved to become a complex tangle of layered systems designed to automate systems administration functions that are knowledge- and labor-intensive. This expensive and non-real-time paradigm is ill suited for a world where customers are demanding communication, collaboration, and commerce at the speed of light. Thanks to hardware-assisted virtualization, and the resulting decoupling of infrastructure and application management, it is now possible to provide dynamic visibility and control of service management to meet the rapidly growing demand for cloud-based services.

VANETS

12. A VANET Based Intelligent Road Traffic Signalling System
Domain: VANET; YOP: 2013
Abstract: The Road Traffic Information System is a key component of the modern intelligent transportation system. Road signalling systems can be made more efficient if real-time information from different road sensors and vehicles can be fed into a wide-area controller to optimize traffic flow, journey time, and the safety of road users. The VANET architecture provides an excellent framework to develop an advanced road traffic signalling system. In this paper we present a unique VANET-based road traffic signalling system that could significantly improve traffic flow, energy efficiency, and the safety of road users. The system has been developed using a distributed architecture by incorporating the distributed networking feature. We first introduce a new Intelligent Road Traffic Signalling System (IRTSS) based on the VANET architecture. The paper presents some initial simulation results obtained using an OPNET-based simulation model. Simulation results show that the proposed architecture can efficiently serve road traffic using the 802.11p-based VANET network.

NETWORKING

13. A Rainfall Prediction Model Using Artificial Neural Network
Domain: Network; YOP: 2013
Abstract: The multilayered artificial neural network trained by the back-propagation algorithm is the most common configuration in use, due to its ease of training. It is estimated that over 80% of all neural network projects in development use back-propagation. In the back-propagation algorithm, there are two phases in the learning cycle: one to propagate the input patterns through the network and the other to adapt the output by changing the weights in the network. The back-propagation feed-forward neural network can be used in many applications such as character recognition, weather and financial prediction, and face detection. This paper implements one of these applications by building training and testing data sets and finding the number of hidden neurons in these layers for the best performance. In the present research, the possibility of predicting average rainfall over Udupi district of Karnataka has been analyzed through artificial neural network models. In formulating the artificial neural network based predictive models, a three-layered network has been constructed. The models under study differ in the number of hidden neurons.
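A minimal sketch of the configuration described above: a three-layer feed-forward network trained with back-propagation, with a forward phase that propagates the input patterns and a backward phase that adapts the weights. The data here is synthetic; a real rainfall study would train on observed records and vary the number of hidden neurons as the paper does.

```python
import numpy as np

# Minimal three-layer (input-hidden-output) network trained by back-propagation.
# The "rainfall" data below is synthetic, used only to show the two learning phases.
rng = np.random.default_rng(0)
X = rng.random((50, 4))                    # 4 input features (e.g., past months)
y = X.sum(axis=1, keepdims=True) / 4.0     # toy target: average "rainfall index"

n_hidden, lr = 6, 0.5
W1 = rng.normal(0, 0.5, (4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1));  b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # forward pass: propagate the input patterns through the network
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: adapt the output by changing the weights
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("final mean squared error:", float((err ** 2).mean()))
```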

14. Caching Strategies Based on Information Density Estimation in Wireless Ad Hoc Networks
Domain: Network; YOP: 2011
Abstract: We address cooperative caching in wireless networks, where the nodes may be mobile and exchange information in a peer-to-peer fashion. We consider both cases of nodes with large- and small-sized caches. For large-sized caches, we devise a strategy where nodes, independently of each other, decide whether to cache some content and for how long. In the case of small-sized caches, we aim to design a content replacement strategy that allows nodes to successfully store newly received information while maintaining the good performance of the content distribution system. Under both conditions, each node takes decisions according to its perception of what nearby users may store in their caches and with the aim of differentiating its own cache content from that of the other nodes. The result is the creation of content diversity within the nodes' neighborhood, so that a requesting user likely finds the desired information nearby. We simulate our caching algorithms in different ad hoc network scenarios and compare them with other caching schemes, showing that our solution succeeds in creating the desired content diversity, thus leading to resource-efficient information access.

15. RandomCast: An Energy-Efficient Communication Scheme for Mobile Ad Hoc Networks
Domain: Network; YOP: 2009
Abstract: In mobile ad hoc networks (MANETs), every node overhears every data transmission occurring in its vicinity and thus consumes energy unnecessarily. However, since some MANET routing protocols such as Dynamic Source Routing (DSR) collect route information via overhearing, they would suffer if they were used in combination with 802.11 PSM (Power Saving Mechanism). Allowing no overhearing may critically deteriorate the performance of the underlying routing protocol, while unconditional overhearing may offset the advantage of using PSM.

16. Information Content-Based Sensor Selection and Transmission Power Adjustment for Collaborative Target Tracking
Domain: Network; YOP: 2009
Abstract: For target tracking applications, wireless sensor nodes provide accurate information since they can be deployed and operated near the phenomenon. These sensing devices have the opportunity of collaborating among themselves to improve target localization and tracking accuracy. An energy-efficient collaborative target tracking paradigm is developed for wireless sensor networks (WSNs). In addition, a novel approach to energy savings in WSNs is devised in the information-controlled transmission power (ICTP) adjustment, where nodes with more information use higher transmission powers than those that are less informative to share their target state information with the neighboring nodes.

17. Farmers Buddy
Domain: Network; YOP: 2008
Abstract: The objective of this project is to help farmers by providing information regarding market prices, weather forecasts, tips, and news through SMS, which is cost effective. The server maintains a database related to agriculture, including market prices, weather reports (state-wise), news related to agriculture, various reports on government policies, and tips and suggestions. Information stored in the database is then sent as SMS to registered farmers to assist them, which may help them take the next step or precautions based on the message. The server also responds to farmers' requests, received in the form of SMS, to the maximum extent possible, but SMS sent from a farmer should be in a specific format. The database can be updated only by an authenticated user authorized by the administrator.

NETWORK + SECURITY

18. An Elliptic Curve Cryptography Based on Matrix Scrambling Method
Domain: Network + Security; YOP: 2012
Abstract: A new matrix scrambling method based on the elliptic curve is proposed in this paper. The proposed algorithm is based on a random function and a shifting technique of a circular queue. In particular, we first transform the message into points on the elliptic curve, as in the embedding system M -> PM, and then apply the encryption/decryption technique based on matrix scrambling. Our scheme is secure against most of the current attacking mechanisms.
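The sketch below illustrates only the matrix-scrambling half of the scheme described above: a key-seeded random function drives circular shifts of the rows and columns of the message matrix. Mapping the message to elliptic-curve points and the ECC encryption itself are omitted, and all parameters (round count, matrix size) are assumptions for the example; in a real scheme the receiver would regenerate the shift sequence from the shared key instead of receiving it.

```python
import random

# Matrix scrambling by circular row/column shifts driven by a key-seeded
# random function (ECC point embedding/encryption is not shown).

def scramble(matrix, key, rounds=8):
    rng = random.Random(key)                  # shared secret seeds the shifts
    rows, cols = len(matrix), len(matrix[0])
    ops = []
    for _ in range(rounds):
        if rng.random() < 0.5:                # circular-shift one row
            r, k = rng.randrange(rows), rng.randrange(1, cols)
            matrix[r] = matrix[r][-k:] + matrix[r][:-k]
            ops.append(("row", r, k))
        else:                                 # circular-shift one column
            c, k = rng.randrange(cols), rng.randrange(1, rows)
            col = [matrix[i][c] for i in range(rows)]
            col = col[-k:] + col[:-k]
            for i in range(rows):
                matrix[i][c] = col[i]
            ops.append(("col", c, k))
    return matrix, ops                        # ops are replayed in reverse to unscramble

def unscramble(matrix, ops):
    rows, cols = len(matrix), len(matrix[0])
    for kind, idx, k in reversed(ops):
        if kind == "row":
            matrix[idx] = matrix[idx][k:] + matrix[idx][:k]
        else:
            col = [matrix[i][idx] for i in range(rows)]
            col = col[k:] + col[:k]
            for i in range(rows):
                matrix[i][idx] = col[i]
    return matrix

m = [list(b"ATTACK"), list(b"ATDAWN"), list(b"PADDED")]
scrambled, ops = scramble([row[:] for row in m], key=1234)
assert unscramble([row[:] for row in scrambled], ops) == m
```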

19. A Distributed Key Management Framework with Cooperative Message Authentication in VANETs
Domain: Network + Security; YOP: 2011
Abstract: In this paper, we propose a distributed key management framework based on group signatures to provision privacy in vehicular ad hoc networks (VANETs). Distributed key management is expected to facilitate the revocation of malicious vehicles, maintenance of the system, and heterogeneous security policies, compared with the centralized key management assumed by existing group signature schemes. In our framework, each road side unit (RSU) acts as the key distributor for the group, where a new issue is that the semi-trusted RSUs may be compromised. Thus, we develop security protocols for the scheme which are able to detect compromised RSUs and their colluding malicious vehicles. Moreover, we address the issue of large computation overhead due to the group signature implementation. A practical cooperative message authentication protocol is thus proposed to alleviate the verification burden, where each vehicle just needs to verify a small number of messages. Details of possible attacks and the corresponding solutions are discussed. We further develop a medium access control (MAC) layer analytical model and carry out NS2 simulations to examine the key distribution delay and the missed detection ratio of malicious messages, with the proposed key management framework implemented over 802.11-based VANETs.

20. Hybrid Intrusion Detection Systems (HIDS) Using Fuzzy Logic
Domain: Network + Security; YOP: 2011
Abstract: With the rapid growth of interconnected computers, the crime rate has also increased, and finding ways to mitigate those crimes has become an important problem. Across the globe, organizations, higher learning institutions, and governments are completely dependent on computer networks, which play a major role in their daily operations. Hence the necessity of protecting those networked systems has also increased. Cyber crimes like compromised servers, phishing, and sabotage of private information have increased in the recent past. It need not be a massive intrusion; a single intrusion can result in the loss of highly privileged and important data. Intrusion behavior can be classified based on different attack types. Smart intruders will not attack using a single attack type; instead, they will combine a few different attack types to deceive the detection system at the gateway. As a countermeasure, computational intelligence can be applied to intrusion detection systems to recognize the attacks, alert the administrator about their form and severity, and also take any predetermined or adaptive measures to dissuade the intrusion.

21. A Large-Scale Hidden Semi-Markov Model for Anomaly Detection on User Browsing Behaviors
Domain: Network + Security; YOP: 2009
Abstract: Many existing methods against distributed denial of service (DDoS) attacks focus on the Transmission Control Protocol and Internet Protocol layers rather than the higher layers, and they are not suitable for handling the new type of attack based on the application layer. A DDoS attack attempts to make a computer resource unavailable to its intended users, implemented by either forcing the targeted computer(s) to reset or consuming its resources so that it can no longer provide its intended service. In this project, we establish a new system to achieve early attack discovery and filtering for application-layer DDoS attacks. An extended hidden semi-Markov model is proposed to describe the browsing habits of web surfers. A forward algorithm is derived for the online implementation of the model based on the M-algorithm, in order to reduce the computational cost introduced by the model's large state space. Entropy of the user's HTTP request sequence fitted to the model is used as a criterion to measure the user's normality. Finally, experiments are conducted to validate our model and algorithm.
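As a small illustration of one ingredient mentioned above, the snippet below computes the Shannon entropy of a user's HTTP request sequence as a crude normality signal. The hidden semi-Markov model and the M-algorithm-based forward procedure are not reproduced; the sample request lists are made up.

```python
from collections import Counter
from math import log2

def request_entropy(requests):
    """Shannon entropy (bits) of a user's HTTP request sequence.

    A very repetitive sequence (as often produced by application-layer DDoS
    bots hammering one URL) has low entropy; this is only the normality-score
    ingredient, not the paper's hidden semi-Markov model.
    """
    counts = Counter(requests)
    total = len(requests)
    return -sum((c / total) * log2(c / total) for c in counts.values())

human = ["/", "/news", "/article?id=7", "/css/site.css", "/article?id=9", "/about"]
bot = ["/search?q=x"] * 200
# diverse human browsing vs. repetitive bot traffic (near-zero entropy)
print(round(request_entropy(human), 2), round(request_entropy(bot), 2))
```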

22. Enhanced Security for Online Exams Using Group Cryptography
Domain: Network + Security; YOP: 2009
Abstract: Online examination is a very popular field with many security assurances, yet it still fails to control cheating. Online exams have not been widely adopted, whereas online education is used all over the world without major security issues. An online exam is defined in this project as one that takes place over the insecure Internet and where no proctor is in the same location as the examinees. This project proposes an enhanced, secure online exam management environment mediated by group cryptography techniques, using remote monitoring and control of ports and input. The target domain of this project is online exams for contests in any subject and at any level of study, as well as exams in online university courses with students in various remote locations. This project proposes a simple solution to the issue of security and cheating for online exams. The solution uses an enhanced Security Control system in the Online Exam (SeCOnE), which is based on group cryptography.

23. Credit Card Fraud Detection Using Hidden Markov Model
Domain: Network + Security; YOP: 2008
Abstract: Nowadays the usage of credit cards has dramatically increased. As the credit card becomes the most popular mode of payment for both online and regular purchases, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a Hidden Markov Model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.
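A compact sketch of the acceptance test described above: score a transaction sequence with the HMM forward algorithm and treat a sharp drop in per-transaction log-likelihood as suspicious. The HMM parameters below are hand-set toy values rather than parameters trained on a cardholder's history, and the observation alphabet (low/medium/high amount bands) is an assumption for the example.

```python
import numpy as np

# Toy HMM over spending categories 0=low, 1=medium, 2=high amounts.
# In the approach described above the parameters would be trained (e.g., with
# Baum-Welch) on the cardholder's past transactions; here they are hand-set.
start = np.array([0.6, 0.3, 0.1])           # initial state distribution
trans = np.array([[0.7, 0.2, 0.1],          # state-transition matrix
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])
emit = np.array([[0.8, 0.15, 0.05],         # P(observed amount band | state)
                 [0.2, 0.6, 0.2],
                 [0.05, 0.25, 0.7]])

def log_likelihood(obs):
    """Forward algorithm with scaling; returns log P(obs | model)."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

def sequence_score(obs):
    return log_likelihood(obs) / len(obs)     # per-transaction log-likelihood

history = [0, 0, 1, 0, 1, 0, 0]               # the cardholder's "normal" behavior
print(sequence_score(history + [0]))          # plausible continuation: higher score
print(sequence_score(history + [2]))          # sudden high amount: noticeably lower
# A deployment would flag the new transaction when the score drops below a
# threshold calibrated on the cardholder's history.
```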

24. Jamming-Aware Traffic Allocation for Multiple-Path Routing Using Portfolio Selection
Domain: Network + Security; YOP: 2008
Abstract: Multiple-path source routing protocols allow a data source node to distribute the total traffic among available paths. In this project, we consider the problem of jamming-aware source routing in which the source node performs traffic allocation based on empirical jamming statistics at individual network nodes. We formulate this traffic allocation as a lossy network flow optimization problem using portfolio selection theory from financial statistics. We show that in multi-source networks, this centralized optimization problem can be solved using a distributed algorithm based on decomposition in network utility maximization (NUM). We demonstrate the network's ability to estimate the impact of jamming and incorporate these estimates into the traffic allocation problem. Finally, we simulate the achievable throughput using our proposed traffic allocation method in several scenarios.
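A small numeric sketch of the portfolio-selection idea referenced above: treat each path's estimated delivery rate as a risky "return" and choose a traffic split that trades expected delivery against variance. The closed form below ignores the nonnegativity constraint and the paper's lossy network-flow/NUM formulation; the rate and variance figures are made up.

```python
import numpy as np

# Mean-variance ("portfolio selection") view of splitting traffic across paths.
# mu[i]  : estimated packet success rate of path i under the observed jamming
# var[i] : variance of that estimate (jamming makes a path risky)
mu = np.array([0.90, 0.75, 0.60])
var = np.array([0.25, 0.10, 0.05]) ** 2
k = 20.0                                  # risk-aversion weight

# Maximize mu.w - (k/2) w.Sigma.w subject to sum(w) = 1, solved via the
# Lagrange multiplier for a diagonal covariance (nonnegativity ignored here).
inv_var = 1.0 / var
lam = (inv_var @ mu - k) / inv_var.sum()
w = inv_var * (mu - lam) / k
print(np.round(w, 3), w.sum())
# The split shifts traffic away from the high-variance path toward steadier
# paths, even though the risky path has the highest mean success rate.
```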

25. Data Leakage Detection
Domain: Network + Security; YOP: 2008
Abstract: A data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data are leaked and found in an unauthorized place (e.g., on the web or on somebody's laptop). The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We propose data allocation strategies (across the agents) that improve the probability of identifying leakages. These methods do not rely on alterations of the released data (e.g., watermarks). In some cases, we can also inject realistic but fake data records to further improve our chances of detecting leakage and identifying the guilty party.

WIRELESS SENSOR NETWORK

26. An Ant Colony Optimization Approach for Maximizing the Lifetime of Heterogeneous Wireless Sensor Networks
Domain: Wireless Sensor Network; YOP: 2013
Abstract: Maximizing the lifetime of wireless sensor networks (WSNs) is a challenging problem. Although some methods exist to address the problem in homogeneous WSNs, research on this problem in heterogeneous WSNs has progressed at a slow pace. Inspired by the promising performance of ant colony optimization (ACO) in solving combinatorial problems, this paper proposes an ACO-based approach that can maximize the lifetime of heterogeneous WSNs. The methodology is based on finding the maximum number of disjoint connected covers that satisfy both sensing coverage and network connectivity. A construction graph is designed with each vertex denoting the assignment of a device in a subset. Based on pheromone and heuristic information, the ants seek an optimal path on the construction graph to maximize the number of connected covers. The pheromone serves as a metaphor for the search experience in building connected covers. The heuristic information is used to reflect the desirability of device assignments. A local search procedure is designed to further improve the search efficiency. The proposed approach has been applied to a variety of heterogeneous WSNs. The results show that the approach is effective and efficient in finding high-quality solutions for maximizing the lifetime of heterogeneous WSNs.

27. Maximizing Lifetime Vector in Wireless Sensor Networks
Domain: Wireless Sensor Network; YOP: 2013
Abstract: Maximizing the lifetime of a sensor network has been a subject of intensive study. However, much prior work defines the network lifetime as the time before the first data-generating sensor in the network runs out of energy or becomes unreachable from the sink due to network partition. The problem is that even though one sensor is out of operation, the rest of the network may well remain operational, with other sensors generating useful data and delivering those data to the sink. Hence, instead of just maximizing the time before the first sensor is out of operation, we should maximize the lifetime vector of the network, consisting of the lifetimes of all sensors, sorted in ascending order. For this problem, there exists only a centralized algorithm that solves a series of linear programming problems with high-order complexities. This paper proposes a fully distributed algorithm that runs iteratively. Each iteration produces a lifetime vector that is better than the vector produced by the previous iteration. Instead of giving the optimal result in one shot after lengthy computation, the proposed distributed algorithm has a result available at any time, and the more time spent, the better the result. We show that when the algorithm stabilizes, its result produces the maximum lifetime vector. Furthermore, simulations demonstrate that the algorithm is able to converge rapidly toward the maximum lifetime vector with low overhead.

28. Security in Wireless Sensor Networks with Public Key Techniques
Domain: Wireless Sensor Network; YOP: 2012
Abstract: Wireless sensor networks (WSNs) have attracted many researchers due to their usage in critical applications. WSNs have limitations on computational capacity, battery, etc., which provides scope for challenging problems. Applications of WSNs are growing drastically, from indoor deployment to critical outdoor deployment. WSNs are distributed and deployed in unattended environments; due to this, they are vulnerable to numerous security threats. The results are not completely trustworthy due to their deployment in outdoor and uncontrolled environments. In this paper, we focus on the security issues of WSNs and propose a protocol based on public key cryptography for external agent authentication and session key establishment. The proposed protocol is efficient and secure compared to other public-key-based protocols in WSNs.

29. On Maximizing the Lifetime of WSNs Using Virtual Backbone Scheduling
Domain: Wireless Sensor Network; YOP: 2012
Abstract: Wireless sensor networks (WSNs) are key for various applications that involve long-term and low-cost monitoring and actuation. In these applications, sensor nodes use batteries as the sole energy source; therefore, energy efficiency becomes critical. We observe that many WSN applications require redundant sensor nodes to achieve fault tolerance and quality of service (QoS) of the sensing. However, the same redundancy may not be necessary for multihop communication because of the light traffic load and the stable wireless links. In this paper, we present a novel sleep-scheduling technique called Virtual Backbone Scheduling (VBS). VBS is designed for WSNs that have redundant sensor nodes. VBS forms multiple overlapped backbones which work alternately to prolong the network lifetime. In VBS, traffic is only forwarded by backbone sensor nodes, and the rest of the sensor nodes turn off their radios to save energy. The rotation of multiple backbones makes sure that the energy consumption of all sensor nodes is balanced, which fully utilizes the energy and achieves a longer network lifetime compared to existing techniques. The scheduling problem of VBS is formulated as the Maximum Lifetime Backbone Scheduling (MLBS) problem. Since the MLBS problem is NP-hard, we propose approximation algorithms based on the Schedule Transition Graph (STG) and the Virtual Scheduling Graph (VSG). We also present an Iterative Local Replacement (ILR) scheme as a distributed implementation. Theoretical analyses and simulation studies verify that VBS is superior to existing techniques.

30. The Clusterhead Chaining Scheme Considering Scalability of the WSN
Domain: Wireless Sensor Network; YOP: 2012
Abstract: A wireless sensor network is a network consisting of numerous small sensor nodes with sensing, processing, and wireless communication capabilities. Many routing protocols have been suggested for wireless sensor networks due to the limited resources of a sensor node, such as its limited CPU, memory size, and battery. They can be divided into flat and hierarchical routing protocols. A hierarchical routing protocol uses a clustering scheme and shows better performance than flat routing protocols. However, hierarchical routing protocols assume that sensor nodes can communicate with the base station by one-hop routing. If the network becomes larger, the hierarchical routing protocol is unsuitable because a long distance between a clusterhead and the base station can cause communication problems. In this paper, we propose a clusterhead chaining scheme to solve this problem. Our scheme is suitable for vast wireless sensor networks, and the simulation results show that the proposed scheme performs better than the general hierarchical routing protocol.

31. Fast Detection of Replica Node Attacks in Mobile Sensor Networks Using Sequential Analysis
Domain: Wireless Sensor Network; YOP: 2011
Abstract: Due to the unattended nature of wireless sensor networks, an adversary can capture and compromise sensor nodes, generate replicas of those nodes, and mount a variety of attacks with the replicas he injects into the network. These attacks are dangerous because they allow the attacker to leverage the compromise of a few nodes to exert control over much of the network. Several replica node detection schemes in the literature have been proposed to defend against these attacks in static sensor networks. These approaches rely on fixed sensor locations and hence do not work in mobile sensor networks, where sensors are expected to move. In this work, we propose a fast and effective mobile replica node detection scheme using the Sequential Probability Ratio Test. To the best of our knowledge, this is the first work to tackle the problem of replica node attacks in mobile sensor networks. We show analytically and through simulation experiments that our scheme achieves effective and robust replica detection capability with reasonable overhead.
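The snippet below sketches the sequential-analysis idea behind the scheme: apply Wald's Sequential Probability Ratio Test to per-identity speed measurements, since a replicated identity tends to appear to move faster than a single mobile node can. The per-sample probabilities, error bounds, and speed samples are illustrative values, not the paper's.

```python
from math import log

def sprt_replica_test(speed_samples, v_max,
                      p0=0.1, p1=0.6, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test sketch for replica detection.

    Each sample is a measured speed for one node identity (distance between two
    successive location claims divided by the elapsed time). H0: the node is
    benign (it rarely appears to exceed v_max, prob p0 per sample); H1: the
    identity is replicated (it often appears to exceed v_max, prob p1).
    """
    accept_h1 = log((1 - beta) / alpha)       # upper decision boundary
    accept_h0 = log(beta / (1 - alpha))       # lower decision boundary
    llr = 0.0
    for v in speed_samples:
        exceeded = v > v_max
        llr += log(p1 / p0) if exceeded else log((1 - p1) / (1 - p0))
        if llr >= accept_h1:
            return "replica"                  # revoke this node identity
        if llr <= accept_h0:
            return "benign"
    return "undecided"                        # keep observing

print(sprt_replica_test([3, 4, 12, 15, 14, 13], v_max=10))   # -> replica
print(sprt_replica_test([3, 4, 5, 2, 6, 4, 3, 5], v_max=10)) # -> benign
```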

WEB MINING

32. Association Rule: Extracting Knowledge Using Market Basket Analysis
Domain: Web Mining; YOP: 2012
Abstract: Decision making and understanding the behavior of the customer have become vital and challenging problems for organizations seeking to sustain their position in competitive markets. Technological innovations have enabled faster processing of queries and sub-second response times. Data mining tools have become the surest weapon for analyzing huge amounts of data and a breakthrough in making correct decisions. The objective of this paper is to analyze a huge amount of data, thereby exploiting consumer behavior and making correct decisions that lead to a competitive edge over rivals. Experimental analysis has been done employing association rules using Market Basket Analysis to prove its worth over conventional methodologies.
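A minimal sketch of the support/confidence computation behind association-rule mining on toy basket data. It only enumerates 1-item to 1-item rules; a full Market Basket Analysis would use Apriori or FP-growth and real transaction data, and the thresholds here are arbitrary.

```python
from itertools import combinations
from collections import Counter

# Toy market-basket data (hypothetical transactions). The sketch computes the
# support and confidence of simple "A => B" rules, the basic quantities behind
# association-rule mining.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "diapers"},
    {"bread", "butter"},
    {"beer", "diapers", "milk"},
]

n = len(transactions)
item_count = Counter(item for t in transactions for item in t)
pair_count = Counter(frozenset(p) for t in transactions
                     for p in combinations(sorted(t), 2))

for pair, count in pair_count.items():
    a, b = tuple(pair)
    support = count / n                       # fraction of baskets with both items
    for lhs, rhs in ((a, b), (b, a)):
        confidence = count / item_count[lhs]  # P(rhs in basket | lhs in basket)
        if support >= 0.4 and confidence >= 0.6:
            print(f"{lhs} => {rhs}  support={support:.2f} confidence={confidence:.2f}")
```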

33. A Collaborative Decentralized Approach to Web Search
Domain: Web Mining; YOP: 2012
Abstract: Most explanations of user behavior while interacting with the web are based on a top-down approach, where the entire Web, viewed as a vast collection of pages and interconnection links, is used to predict how users interact with it. A prominent example of this approach is the random-surfer model, the core ingredient behind Google's PageRank. This model exploits the linking structure of the Web to estimate the percentage of web surfers viewing any given page. Contrary to the top-down approach, a bottom-up approach starts from the user and incrementally builds the dynamics of the web as the result of the user's interaction with it. The second approach has not been widely investigated, although it has numerous advantages over the top-down approach regarding (at least) personalization and decentralization of the infrastructure required for web tools. In this paper, we propose a bottom-up approach to study web dynamics based on web-related data browsed, collected, tagged, and semi-organized by end users. Our approach has been materialized into a hybrid bottom-up search engine that produces search results based solely on user-provided web-related data and their sharing among users. We conduct an extensive experimental study to demonstrate the qualitative and quantitative characteristics of user-generated web-related data, their strengths and weaknesses, as well as to compare the search results of our bottom-up search engine with those of a traditional one. Our study shows that a bottom-up search engine starts from a core consisting of the most interesting part of the Web (according to user opinions) and incrementally (and measurably) improves its ranking, coverage, and accuracy. Finally, we discuss how our approach can be integrated with PageRank, resulting in a new page ranking algorithm that can uniquely combine link analysis with users' preferences.

34. Adaptive Provisioning of Human Expertise in Service-oriented Systems
Domain: Web Mining; YOP: 2008
Abstract: Web-based collaborations have become essential in today's business environments. Due to the availability of various SOA frameworks, Web services emerged as the de facto technology to realize flexible compositions of services. While most existing work focuses on the discovery and composition of software-based services, we highlight concepts for a people-centric Web. Knowledge-intensive environments clearly demand provisioning of human expertise along with sharing of computing resources or business data through software-based services. To address these challenges, we introduce an adaptive approach allowing humans to provide their expertise through services using SOA standards such as WSDL and SOAP. The seamless integration of humans in the SOA loop triggers numerous social implications, such as evolving expertise and drifting interests of human service providers. Here we propose a framework based on interaction monitoring techniques that enables adaptations in SOA-based socio-technical systems.
