
CHAPTER 1

INTRODUCTION

The term "cloud" originates from the world of telecommunications when providers began
using virtual private network (VPN) services for data communications [1]. Cloud computing deals
with computation, software, data access and storage services that may not require end-user
knowledge of the physical location and the configuration of the system that is delivering the
services. Cloud computing is a recent trend in IT that moves computing and data away from
desktop and portable PCs into large data centers [2]. The definition of cloud computing provided
by the National Institute of Standards and Technology (NIST) states: "Cloud computing is a model
for enabling convenient, on-demand network access to a shared pool of configurable computing
resources (e.g., networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider interaction" [3].
With the large-scale proliferation of the Internet around the world, applications can now be
delivered as services over the Internet. As a result, overall costs are reduced.

The main goal of cloud computing is to make better use of distributed resources,
combining them to achieve higher throughput and to solve large-scale computation
problems. Cloud computing deals with virtualization, scalability, interoperability, quality of
service and the delivery models of the cloud, namely private, public and hybrid.

1.1.1 CLOUD COMPUTING ARCHITECTURE:

Cloud computing architecture is the systems design of the software systems involved in
the delivery of cloud computing, which usually involves multiple cloud components collaborating
with each other over a loose coupling mechanism such as a messaging queue.

Cloud computing architecture, shown in Fig 1.1, refers to the components and subcomponents
essential for cloud computing. These components usually consist of a front-end
platform (thick or thin client or mobile device), back-end platforms (servers, storage), a cloud-based
delivery system and a network (Internet, Intranet, Inter-cloud).

The front and back ends are connected through a network, normally the Internet, by
means of a delivery system. Fig 1.1 illustrates the cloud computing architecture:

Fig 1.1 Cloud architecture

Cloud Client Platforms:

Cloud computing architectures comprise front-end platforms called cloud clients or
simply clients. These clients include servers, thick, thin and ultra-thin clients, laptops, tablets
and various mobile devices. Client platforms interact with the cloud data storage
via an application (middleware), a web browser or a virtual session.

The ultra-thin client relies on the network to retrieve the vital configuration files that
tell it where its OS binaries are stored. The entire ultra-thin client device operates via the
network, which forms a single point of failure: if the network goes down, the
device is rendered useless.

Cloud Storage:

Cloud storage is online storage where data is stored and made available to
numerous clients. Cloud storage is commonly deployed in the following configurations: public
cloud, private cloud, community cloud, or some combination of the three, also known as hybrid
cloud.
To be effective, cloud storage must be responsive, scalable, flexible, capable of
multi-tenancy and secure.

1.1.2 Cloud Based Delivery:

SOFTWARE-AS-A-SERVICE (SaaS):

The software-as-a-service (SaaS) service model involves the cloud provider installing
and maintaining software within the cloud, with users running the software from their cloud
clients over the Internet (or an intranet). The users' client devices do not require any
application-specific software to be installed, meaning all cloud applications run on servers in the cloud. SaaS
is scalable, and the system administrator may load the applications on a number of servers. For the
user or enterprise, SaaS is normally charged as a monthly or annual fee.
SaaS is the cloud counterpart of locally installed applications in the traditional (non-cloud
computing) provisioning of applications.

Software as a service has four typical methods:

1. Single instance
2. Multi instance
3. Multi-tenant
4. Flex tenancy

DEVELOPMENT-AS-A-SERVICE (DaaS):

Development as a service provides web-based, communally shared development tools.
This is comparable to locally installed development tools in the traditional (non-cloud
computing) provisioning of development tools.

PLATFORM-AS-A-SERVICE (PaaS):

Platform as a service is a cloud computing service that offers users application
platforms and databases as a service. This is comparable to middleware in the traditional
(non-cloud computing) provisioning of application platforms and databases.

INFRASTRUCTURE-AS-A-SERVICE (IaaS):

Infrastructure as a service virtualizes the physical hardware (servers, networks,
storage and system management). This is comparable to the infrastructure and hardware of a
traditional (non-cloud computing) system, operated within the cloud. Companies pay a fee
(monthly or annual) to run virtual servers, networks and storage from the cloud, which reduces
the need for a local data centre, its environment and hardware maintenance.

Cloud Networking
In general, the cloud network layer must offer:

 High bandwidth (low latency) – Ensuring users have uninterrupted access to data
and applications.

 Agile network – On-demand availability of resources requires the capability to
move workloads rapidly and efficiently among servers and perhaps even between clouds.

 Network security – Security is always vital, but with multi-tenancy it becomes
much more crucial because you are now isolating multiple customers from each
other.

1.1.3 CLOUD COMPUTING APPLICATIONS:

Cloud computing is used for applications such as:

 Online file storage
 Photo editing software
 Digital video software
 Twitter-related applications
 Creating image albums
 Web-based antivirus applications
 Word processing applications
 Spreadsheets
 Presentation software
 Finding directions on a map
 E-commerce software
 Miscellaneous applications

CHARACTERISTICS OF CLOUD COMPUTING
On-demand Self-Service:

A consumer can unilaterally provision computing capabilities, such as server time and
network storage, as needed automatically without requiring human interaction with each service
provider.

Broad Network Access:

Capabilities are available over the network and accessed through standard mechanisms
that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets,
laptops, and workstations).

Resource Pooling:

The provider’s computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand. There is a sense of location independence in that the
customer generally has no control or knowledge over the exact location of the provided resources
but may be able to specify location at a higher level of abstraction (e.g., country, state, or
datacenter). Examples of resources include storage, processing, memory, and network
bandwidth.

Rapid elasticity:

Capabilities can be elastically provisioned and released, in some cases automatically, to
scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities
available for provisioning often appear to be unlimited and can be appropriated in any quantity
at any time.

Measured Service:

Cloud systems automatically control and optimize resource use by leveraging a metering
capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled,
and reported, providing transparency for both the provider and consumer of the utilized service.

1.1.4 DRIVE HQ:

Drive Headquarters, Inc. (also called DriveHQ) is a cloud IT service provider.

DriveHQ provides automatic data backup, which is distinct from, and safer than, folder
synchronization. It offers local-to-cloud and cloud-to-cloud data backup services. Compared
with other online backup software and services, DriveHQ Online Backup has quite a few
advantages, including being extremely easy to use.

Features of Drive HQ:

A Complete Enterprise Solution:

DriveHQ's cloud IT solution for enterprises offers some of the most cutting-edge storage,
collaboration, and security technologies, allowing your IT team to quickly deploy a trusted and
scalable cloud environment within your company's infrastructure. Built with the end user as the
primary focus, DriveHQ turns advanced technologies into a simple, flexible, and secure solution
that your whole team can enjoy.

Cloud File Backup:

Whether you store files locally or in the cloud, your data needs to be protected from
human errors, natural disasters and malicious activities. Cloud storage and folder
synchronization are not the same as cloud backup. DriveHQ's Online Backup and Cloud-to-cloud
Backup are true enterprise-level backup services that can automatically back up local data on any
number of computers, as well as selected folders in your Cloud account.

Cloud Drive Mapping:

o Absolutely the Easiest Cloud Storage & Cloud File Server Solution!
o A mapped cloud drive works just like a local drive. It is easier and more efficient
than folder synchronization. You can immediately use your WebDAV cloud drive
without synchronizing files and taking up a lot of local disk space on all computers.
o Try it once and you will never want to use another cloud storage service again!

o Drive mapping is the easiest way of accessing cloud storage. Once you can
directly access cloud storage like a local drive, you will never want to use a
browser-based or folder-sync-based cloud storage service again.

FTP Hosting

Complete FTP/SFTP/FTPS Server Hosting Solution:

o DriveHQ is one of the largest FTP hosting service providers. Our Cloud FTP
Server is a full-featured FTP server hosting solution, allowing you to easily
replace your in-house FTP servers with very few changes; moreover, it is
seamlessly integrated with our Cloud IT service. You can create sub-accounts and
sub-groups, manage user permissions, configure default folders, allocate storage
space and download bytes, etc.

o DriveHQ Cloud FTP Server is compatible with all FTP (SFTP/FTPS) client
software. You can also map it as a cloud drive, or use DriveHQ's native
FileManager software, both of which are easier than regular FTP.

CameraFTP:

o CameraFTP is a subsidiary of DriveHQ. It offers a leading cloud surveillance
and storage service at a very low price.

o You can use any IP camera, webcam, smartphone or tablet as a security camera. The
recorded data is uploaded to CameraFTP's secure cloud storage. You can view
footage at any time from the CameraFTP.com website or CameraFTP Viewer apps.
Pricing:

After researching the company and its competitors, I found that the pricing was
remarkably low compared to other services such as Dropbox, Google, Carbonite, and OneDrive.
However, my pricing was for a business client who required a large amount of storage space and
certain features, so this may not be true in all situations. Depending on how you plan on using
cloud services, you will need to compare each company for pricing, security, functionality and
features.

Set Up:

My client required a large amount of storage, security, WebDav drive mapping, FTP
access and continuous daily backups. So in their case, I took advantage of most of the
features of DriveHQ. Setting this system up was relatively easy, and customer service support
was awesome. So, let's take a look at some of the features that DriveHQ includes in its
"Enterprise Services". As I will only cover some of my favorite features, you can visit DriveHQ for
a full list of options.

Online Backup Tool:

My favorite feature provided by the company is the Online Backup Tool. With
this option, you can set up automatic backups to run at any time you see fit. In my
client's case, we set up continuous backups at both offices. This way, if a file was lost or deleted,
even shortly after it was created, it could be retrieved quickly. The software is also capable of
file versioning, in case you need an older version of a file.

Security:

DriveHQ provides a certain level of security that you may not be able to provide with an
onsite backup. To quote their website, “DriveHQ’s Online Backup client software automatically
backs up your data to our secure and reliable servers. Our data centers have 24/7 onsite security
and surveillance, and uses multiple layers of redundancy to ensure the best security for your
content.”

Afterthoughts:

Overall, I would recommend this company to any mid-size to large business in need of
secure data storage. On a personal level, there may be other options, but I can say with
confidence that DriveHQ is secure, simple, and has great customer service.

1.2 END TO END ENCRYPTION IN EDGE COMPUTING:

Cloud and mobile computing have opened an unprecedented number of access points into
corporate as well as individual data, forcing a rethink of how to protect such data. Whether
three-letter government agencies support it or not, many practitioners are considering end-to-end
(E2E) encryption an important security measure to protect the crown jewels of
organizations and individuals: data. While it solves the problem of unauthorized access to some
degree, it is still in its infancy and has many limitations and pitfalls that practitioners should
consider before embracing it.

E2E encryption ensures that only the users who interact in a system have access to
plaintext data, while all the other entities in the system only see encrypted data. For example, in a
client-server system, clients are the users, and thus only clients gain access to plaintext data
while the server only works with the encrypted data. It should be noted that E2E encryption is
different from data-at-rest encryption and data-in-motion encryption. Data-at-rest encryption
preserves the confidentiality of data when it is not in use. For example, transparent encryption in
Oracle, MS SQL or IBM DB2 databases falls into this category. Data-in-motion encryption
preserves the confidentiality of data on the wire. In other words, it provides transport-level
confidentiality. TLS (Transport Layer Security) over HTTP (Hypertext Transfer Protocol) is the
most widely used data-in-motion encryption on the Internet. Neither of them preserves the
confidentiality of the data at end points when it is being used to perform operations.
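The client-server E2E idea can be sketched in a few lines of Java using the standard javax.crypto API. This is an illustrative sketch, not any particular product's implementation: the class and method names are our own, and key management (how clients would share the key) is deliberately omitted. The point is that the key never leaves the client, so the server stores only opaque bytes.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Toy E2E sketch: the client encrypts before upload and decrypts after
// download; the "server" only ever handles ciphertext.
public class E2EDemo {
    static final int IV_LEN = 12;     // 96-bit nonce, the recommended size for GCM
    static final int TAG_BITS = 128;  // authentication tag length

    // Client-side encryption: returns IV || ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    // Client-side decryption of IV || ciphertext.
    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(blob, 0, IV_LEN)));
        return c.doFinal(blob, IV_LEN, blob.length - IV_LEN);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();   // stays strictly on the client

        byte[] msg = "confidential record".getBytes(StandardCharsets.UTF_8);
        byte[] stored = encrypt(key, msg);  // this blob is all the server sees

        System.out.println(new String(decrypt(key, stored), StandardCharsets.UTF_8));
    }
}
```

GCM is chosen here because it provides integrity as well as confidentiality, so a compromised server cannot silently tamper with the stored blob.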

The way E2E encryption is implemented in a system depends on how entities
communicate with one another. Thus, based on the communication patterns, we
classify these systems into three categories: client-server, messaging, and publish-subscribe.
Client-server is the most widely used communication pattern on the Internet.

For example, a web browser accessing a web page from a web server follows this
pattern. The goal of E2E encryption in client-server systems is to preserve the confidentiality of
data from servers.

In a messaging system, peers usually communicate with each other with coordination
from a centralized server. This communication could be either online (e.g., Skype) or offline
(e.g., email). The goal of E2E encryption in messaging systems is to preserve the confidentiality of
the data being exchanged between two or more peers from the centralized infrastructure
facilitating the messaging.

Publish-subscribe systems allow users to register their interests by way of subscriptions
with a third-party broker such that when a publisher publishes documents that match users'
interests, the broker routes the matching documents to the matching users. The goal of E2E
encryption in publish-subscribe systems is to preserve the confidentiality of both published data
and user subscriptions from third-party brokers.

E2E encryption mitigates the risk of exposing sensitive data due to server compromise.
Unfortunately, E2E encryption comes with a cost, namely broken functionality. There is a new
class of work that tries to strike a balance between these two conflicting goals. We observe that
there are mainly three approaches used in this regard: homomorphic encryption,
secure multiparty computation (SMC) and property-preserving encryption. These
techniques provide varying degrees of security under different threat models.
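As a concrete illustration of the homomorphic approach, the following is a textbook Paillier sketch in Java (our own toy code, not production cryptography and not tied to any system discussed in this report): multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so an untrusted server could aggregate values without ever decrypting them.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Toy Paillier cryptosystem: ciphertext multiplication modulo n^2
// corresponds to plaintext addition modulo n.
public class PaillierDemo {
    final BigInteger n, n2, g, lambda, mu;

    PaillierDemo(int bits) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(bits, rnd);
        BigInteger q = BigInteger.probablePrime(bits, rnd);
        n = p.multiply(q);
        n2 = n.multiply(n);
        g = n.add(BigInteger.ONE);                    // common simplification: g = n + 1
        BigInteger p1 = p.subtract(BigInteger.ONE), q1 = q.subtract(BigInteger.ONE);
        lambda = p1.multiply(q1).divide(p1.gcd(q1));  // lcm(p-1, q-1)
        mu = lambda.modInverse(n);                    // valid when g = n + 1
    }

    BigInteger encrypt(BigInteger m) {
        SecureRandom rnd = new SecureRandom();
        BigInteger r;
        do { r = new BigInteger(n.bitLength(), rnd); }
        while (r.compareTo(n) >= 0 || !r.gcd(n).equals(BigInteger.ONE));
        // c = g^m * r^n mod n^2
        return g.modPow(m, n2).multiply(r.modPow(n, n2)).mod(n2);
    }

    BigInteger decrypt(BigInteger c) {
        // m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n
        BigInteger l = c.modPow(lambda, n2).subtract(BigInteger.ONE).divide(n);
        return l.multiply(mu).mod(n);
    }

    public static void main(String[] args) {
        PaillierDemo pd = new PaillierDemo(512);
        BigInteger a = BigInteger.valueOf(17), b = BigInteger.valueOf(25);
        // The server can add the hidden values without decrypting either one:
        BigInteger sum = pd.encrypt(a).multiply(pd.encrypt(b)).mod(pd.n2);
        System.out.println(pd.decrypt(sum));   // 42
    }
}
```

Note that Paillier supports only addition over ciphertexts; fully homomorphic schemes, which support arbitrary computation, are far more expensive.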

It is timely to analyze the security provided by E2E encrypted systems. Recently, Grubbs
et al. [37] analyzed the security of web applications built on top of encrypted data, but there is no
study analyzing the security of E2E encrypted systems in general. We believe that such a study
helps not only to replicate success in one type of E2E system in another, but also to build systems
that are secure against real-world security threats.

1.2.1 ENCRYPTION TECHNIQUES USED IN E2E ENCRYPTED SYSTEMS:

In this section, we critically analyze the security of various encryption techniques used in
E2E encrypted systems. Classical encryption schemes such as AES and RSA are commonly used
to encrypt data in today's systems. While they preserve confidentiality, by their very
definition the ciphertexts are unusable for any meaningful operations. The issue then
boils down to how to allow one to perform meaningful operations without revealing the underlying
plaintext. Two relatively new classes of encryption schemes try to solve this conundrum. The
first class is generally called Property-Preserving Encryption (PPE) schemes, while
the second is called Homomorphic Encryption (HE) schemes.

Researchers in the PPE camp ask how to encrypt in a way that preserves
some properties of the underlying plaintext. Researchers in the HE camp, instead of trying to
reveal properties, ask how to perform computations over encrypted data without
decrypting it. These two classes have their own pros and cons that one should consider when
building practical systems utilizing these schemes.
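To illustrate the property-preserving idea, here is a minimal Java sketch using a keyed deterministic token (HMAC-SHA256). It is our own toy construction rather than a full PPE scheme (the token is one-way, not decryptable), but it shows the trade-off PPE makes: equal plaintexts map to equal tokens, so a server can test equality, for example to match index entries, without seeing plaintext, at the cost of revealing the equality pattern.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Keyed deterministic tokens: same plaintext + same key => same token,
// so equality is preserved (and revealed) while plaintext stays hidden.
public class EqualityToken {
    static String token(byte[] key, String plaintext) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] digest = mac.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        // Key held only by clients; the server sees just the tokens.
        byte[] key = "client-side-secret-key-32-bytes!".getBytes(StandardCharsets.UTF_8);
        String t1 = token(key, "alice@example.com");
        String t2 = token(key, "alice@example.com");
        String t3 = token(key, "bob@example.com");
        System.out.println(t1.equals(t2));  // equal plaintexts give equal tokens
        System.out.println(t1.equals(t3));  // different plaintexts give different tokens
    }
}
```

Leaking the equality pattern is exactly the kind of residual exposure that attacks on encrypted web applications exploit, which is why the security of PPE-based systems must be judged under realistic threat models.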

CHAPTER 2
Literature Survey :
2.1 Edge computing: Vision and challenges
Authors: Weisong Shi, Fellow, IEEE, Jie Cao, Student Member, IEEE, Quan Zhang,
Student Member, IEEE, Youhuizi Li, and Lanyu Xu

Abstract :
The proliferation of Internet of Things (IoT) and the success of rich cloud services have
pushed the horizon of a new computing paradigm, edge computing, which calls for processing
the data at the edge of the network. Edge computing has the potential to address the concerns of
response time requirement, battery life constraint, bandwidth cost saving, as well as data safety
and privacy. In this paper, we introduce the definition of edge computing, followed by several
case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge
to materialize the concept of edge computing. Finally, we present several challenges and
opportunities in the field of edge computing, and hope this paper will gain attention from the
community and inspire more research in this direction.
2.2 A data protection model for fog computing
Authors: Thanh Dat Dang, Doan Hoang
Abstract:
Cloud computing has established itself as an alternative IT infrastructure and service
model. However, as with all logically centralized resource and service provisioning
infrastructures, cloud does not handle well local issues involving a large number of networked
elements (IoTs) and it is not responsive enough for many applications that require immediate
attention of a local controller. Fog computing preserves many benefits of cloud computing and it
is also in a good position to address these local and performance issues because its resources and
specific services are virtualized and located at the edge of the customer premise. However, data
security is a critical challenge in fog computing especially when fog nodes and their data move
frequently in its environment. This paper addresses the data protection and the performance
issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among
fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control
(FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to
handle changes of users and fog devices' locations. The implementation results demonstrate the
feasibility and the efficiency of our proposed framework.

2.3 Fog Computing: Survey of Trends, Architectures, Requirements, and Research
Directions
Authors: Ranesh Kumar Naha, Saurabh Garg, Dimitrios Georgakopoulos, Prem Prakash
Jayaraman, Longxiang Gao, Yong Xiang

Abstract:

Emerging technologies such as the Internet of Things (IoT) require latency-aware
computation for real-time application processing. In IoT environments, connected things generate a
huge amount of data, which are generally referred to as big data. Data generated from IoT devices are
generally processed in a cloud infrastructure because of the on-demand services and scalability
features of the cloud computing paradigm. However, processing IoT application requests on the
cloud exclusively is not an efficient solution for some IoT applications, especially time-sensitive
ones. To address this issue, Fog computing, which resides in between cloud and IoT devices, was
proposed. In general, in the Fog computing environment, IoT devices are connected to Fog devices.
These Fog devices are located in close proximity to users and are responsible for intermediate
computation and storage. One of the key challenges in running IoT applications in a Fog computing
environment are resource allocation and task scheduling. Fog computing research is still in its
infancy, and taxonomy-based investigation into the requirements of Fog infrastructure, platform, and
applications mapped to current research is still required. This survey will help the industry and
research community synthesize and identify the requirements for Fog computing. This paper starts
with an overview of Fog computing in which the definition of Fog computing, research trends, and
the technical differences between Fog and cloud are reviewed. Then, we investigate numerous
proposed Fog computing architectures and describe the components of these architectures in detail.
From this, the role of each component will be defined, which will help in the deployment of Fog
computing. Next, a taxonomy of Fog computing is proposed by considering the requirements of the
Fog computing paradigm. We also discuss existing research works and gaps in resource allocation
and scheduling, fault tolerance, simulation tools, and Fog-based microservices. Finally, by
addressing the limitations of current research works, we present some open issues, which will
determine the future research direction for the Fog computing paradigm.

2.4 Fuzzy identity-based encryption
Authors: Amit Sahai, Brent Waters
Abstract :

We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy
Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy
IBE scheme allows a private key for an identity, ω, to decrypt a ciphertext encrypted with an
identity, ω′, if and only if the identities ω and ω′ are close to each other as measured by the “set
overlap” distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric
inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for
the use of biometric identities, which inherently will have some noise each time they are sampled.
Additionally, we show that Fuzzy-IBE can be used for a type of application that we term “attribute-
based encryption”. In this paper we present two constructions of Fuzzy IBE schemes. Our
constructions can be viewed as an Identity-Based Encryption of a message under several attributes
that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion
attacks. Additionally, our basic construction does not use random oracles. We prove the security of
our schemes under the Selective-ID security model.

2.5 Attribute-based encryption with verifiable outsourced decryption
Authors: Junzuo Lai, Robert H. Deng, Chaowen Guan, and Jian Weng
Abstract:
Attribute-based encryption (ABE) is a public-key-based one-to-many encryption that
allows users to encrypt and decrypt data based on user attributes. A promising application of
ABE is flexible access control of encrypted data stored in the cloud, using access policies and
ascribed attributes associated with private keys and ciphertexts. One of the main efficiency
drawbacks of the existing ABE schemes is that decryption involves expensive pairing operations
and the number of such operations grows with the complexity of the access policy. Recently,
Green et al. proposed an ABE system with outsourced decryption that largely eliminates the
decryption overhead for users.

In such a system, a user provides an untrusted server, say a cloud service provider, with a
transformation key that allows the cloud to translate any ABE cipher text satisfied by that user's
attributes or access policy into a simple cipher text, and it only incurs a small computational
overhead for the user to recover the plaintext from the transformed cipher text. Security of an
ABE system with outsourced decryption ensures that an adversary (including a malicious cloud)
will not be able to learn anything about the encrypted message; however, it does not guarantee
the correctness of the transformation done by the cloud. In this paper, we consider a new
requirement of ABE with outsourced decryption: verifiability. Informally, verifiability
guarantees that a user can efficiently check if the transformation is done correctly. We give the
formal model of ABE with verifiable outsourced decryption and propose a concrete scheme. We
prove that our new scheme is both secure and verifiable, without relying on random oracles.
Finally, we show an implementation of our scheme and results of performance measurements,
which indicate a significant reduction in the computing resources imposed on users.

2.6 Attribute-Based Signcryption Scheme with Non-monotonic Access Structure
Authors: Yiliang Han, Wanyi Lu, Xiaoyuan Yang

Abstract:

Encrypted data in a traditional public-key cryptosystem provides only all-or-nothing
access control; that is, one can either decrypt the entire plaintext or learn nothing
other than its length. Attribute-based signcryption provides not only combined
confidentiality and unforgeability, but also the ability for participants to access ciphertexts
by their attributes instead of their identities. This paper proposes a new attribute-based signcryption
scheme with a non-monotonic access structure and constant-size ciphertext. The
proposal supports AND, OR, and NEG gates, providing more flexible access control
and expressive ability. Its confidentiality and unforgeability are evaluated in the
standard model under the q-DBDHE assumption. By employing the zero inner-multiplication,
the length of the ciphertext is independent of the number of attributes and remains constant at
5|G1|+|m|. Additionally, the scheme can be verified publicly. It could be used in massive
social networks, enabling data sharing between communities.

Title: Edge Computing: Vision and Challenges
Concept Used: Cloud offloading to smart home and city, as well as collaborative edge, to materialize the concept of edge computing.

Title: A data protection model for fog computing
Concept Used: Addresses data protection and performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users' and fog devices' locations.

Title: Fog Computing: Survey of Trends, Architectures, Requirements, and Research Directions
Concept Used: Addresses latency-aware computation in IoT devices; focuses on the differences between cloud and fog computing; defines solutions for resource allocation and scheduling, fault tolerance, simulation tools, and Fog-based microservices.

Title: Attribute-Based Signcryption Scheme with Non-monotonic Access Structure
Concept Used: Proposes a new attribute-based signcryption scheme with a non-monotonic access structure and constant-size ciphertext; supports AND, OR, and NEG gates for more flexible access control and expressive ability; its confidentiality and unforgeability are evaluated in the standard model under the q-DBDHE assumption.

Title: Attribute-based encryption with verifiable outsourced decryption
Concept Used: A user provides an untrusted server, say a cloud service provider, with a transformation key that allows the cloud to translate any ABE ciphertext satisfied by that user's attributes or access policy into a simple ciphertext.

CHAPTER 3
SYSTEM STUDY
3.1 Existing System :

The existing system proposed proxy-aided ciphertext-policy ABE (PA-CPABE), which
outsources the majority of the decryption computations to edge devices. The authors gave a generic
construction for PA-CPABE via which a PA-CPABE scheme can be converted from a CP-ABE
scheme, and then applied a concrete CP-ABE scheme satisfying certain properties to the
generic construction to obtain a concrete PA-CPABE scheme.

They put forth a primitive called proxy-aided ciphertext-policy attribute-based encryption
(PA-CPABE) to outsource the decryption workloads of ABE ciphertexts to an untrusted proxy (i.e.,
an edge device), without requiring any secure channels for key distribution.

3.1.1 Disadvantages:

 There is no assurance that only the respective user or device accesses the local cloud
data; moreover, no authority can be used to track access records.

 It incurs high latency, which makes IoT devices inefficient.

3.2 Proposed System :

The proposed system implements end-to-end encryption, which has recently received
increasing attention as a way to protect against such threats. End-to-end (E2E) encryption
preserves the confidentiality of data on the wire as well as from service providers by
performing encryption/decryption at the clients, keeping the keys strictly within client devices.

Advantages :

 E2E encryption ensures that only the users who interact in a system have access to
original data.

 Devices can access data with low latency.

CHAPTER 4

SYSTEM REQUIREMENTS:

4.1 HARDWARE REQUIREMENTS:


System : Pentium IV 2.4 GHz.

Hard Disk : 200 GB.

Monitor : 15" VGA Colour.

Mouse : Logitech.

RAM : 1024 MB.

4.2 SOFTWARE REQUIREMENTS:


Operating system : Windows XP/7.

Coding Language : JAVA/J2EE

IDE : Netbeans 7.3

Database : MYSQL.

Cloud Service : DriveHQ

SOFTWARE DESCRIPTION:

LANGUAGE SPECIFICATION:

JAVA

Java is a computer programming language that is concurrent, class-based, object-oriented,
and specifically designed to have as few implementation dependencies as possible. It is
intended to let application developers "write once, run anywhere" (WORA), meaning that code
that runs on one platform does not need to be recompiled to run on another. Java applications are
typically compiled to bytecode (class files) that can run on any Java Virtual Machine (JVM)
regardless of computer architecture. Java is, as of 2014, one of the most popular programming
languages in use, particularly for client-server web applications, with a reported 9 million
developers. Java was originally developed by James Gosling at Sun Microsystems (which has
since merged into Oracle Corporation) and released in 1995 as a core component of Sun
Microsystems' Java platform. The language derives much of its syntax from C and C++, but it
has fewer low-level facilities than either of them.

The original and reference implementation Java compilers, virtual machines, and class
libraries were developed by Sun from 1991 and first released in 1995. As of May 2007, in
compliance with the specifications of the Java Community Process, Sun relicensed most of its Java
technologies under the GNU General Public License. Others have also developed alternative
implementations of these Sun technologies, such as the GNU Compiler for Java (byte code
compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

PRINCIPLES

 It should be "simple, object-oriented and familiar"



 It should be "robust and secure"

 It should be "architecture-neutral and portable"

 It should execute with "high performance"

 It should be "interpreted, threaded, and dynamic"

JAVA PLATFORM

One characteristic of Java is portability, which means that computer programs written in
the Java language must run similarly on any hardware/operating-system platform. This is
achieved by compiling the Java language code to an intermediate representation called Java byte
code, instead of directly to platform-specific machine code.
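As a trivial illustration of our own, the program below is compiled once by `javac` into `Hello.class` (bytecode); that same class file can then be executed by a JVM on Windows, Linux or any other supported platform without recompilation.

```java
public class Hello {
    public static String greeting() {
        return "Hello from the JVM";
    }

    public static void main(String[] args) {
        // javac Hello.java  -> Hello.class (platform-neutral bytecode)
        // java Hello        -> the same class file runs on any JVM
        System.out.println(greeting());
    }
}
```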

IMPLEMENTATIONS

Oracle Corporation is the current owner of the official implementation of the Java SE
platform, following their acquisition of Sun Microsystems on January 27, 2010. This
implementation is based on the original implementation of Java by Sun. The Oracle
implementation is available for Mac OS X, Windows and Solaris. Because Java lacks any formal
standardization recognized by Ecma International, ISO/IEC, ANSI, or other third-party standards
organization, the Oracle implementation is the de facto standard.

The Oracle implementation is packaged into two different distributions: The Java
Runtime Environment (JRE) which contains the parts of the Java SE platform required to run
Java programs and is intended for end-users, and the Java Development Kit (JDK), which is
intended for software developers and includes development tools such as the Java compiler,
Javadoc, Jar, and a debugger.

OpenJDK is another notable Java SE implementation that is licensed under the GPL. The
implementation started when Sun began releasing the Java source code under the GPL. As of
Java SE 7, OpenJDK is the official Java reference implementation.

The goal of Java is to make all implementations of Java compatible. Historically, Sun's
trademark license for usage of the Java brand insists that all implementations be "compatible". This
resulted in a legal dispute with Microsoft after Sun claimed that the Microsoft implementation did
not support RMI or JNI and had added platform-specific features of their own. Sun sued in 1997, and
in 2001 won a settlement of US$20 million, as well as a court order enforcing the terms of the license
from Sun. As a result, Microsoft no longer ships Windows with Java.

FEATURES OF JAVA

 Simple

 Object Oriented

 Platform Independent

 Robust

 Distributed

 Multithreaded

 Dynamic

SIMPLE

Java language constructs are easy to learn and use, and the language takes care of memory
management. Though Java was derived from C++, the complexities associated with C++ have
been eliminated in Java.

OBJECT-ORIENTED

Java is designed around the object-oriented model. So, in Java the focus is on the ‘data’
and the ‘methods’ that operate on the data in an application and not just on the procedures. The
data and methods together describe the state and the behaviour of an object in Java.
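As a small example of our own (the class name and figures are illustrative), the data and the methods that operate on it live together in one class, which defines the state and behaviour of each object:

```java
public class Account {
    private double balance;                    // state: the data of the object

    public Account(double openingBalance) {
        this.balance = openingBalance;
    }

    public void deposit(double amount) {       // behaviour: a method operating on the data
        if (amount > 0) {
            balance += amount;
        }
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account(100.0);
        a.deposit(25.0);
        System.out.println(a.getBalance());    // prints 125.0
    }
}
```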

ROBUST

Java is a robust language since it has strict compile-time and run-time checking of code.
This minimizes programming errors. Error handling and recovery are taken care of in Java by the
‘exception-handling’ feature.
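A short example of our own shows the exception-handling feature at work: a malformed input raises `NumberFormatException`, which is caught and recovered from instead of crashing the program.

```java
public class RobustDemo {

    // Parse a number, recovering from bad input through exception handling.
    public static int parseOrDefault(String s, int fallback) {
        try {
            return Integer.parseInt(s);        // may throw NumberFormatException
        } catch (NumberFormatException e) {
            return fallback;                   // recovery path instead of a crash
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42", 0));   // prints 42
        System.out.println(parseOrDefault("oops", 0)); // prints 0
    }
}
```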

DISTRIBUTED

Java can be used to develop applications that are portable across multiple platforms,
operating systems and graphical user interfaces. Java is designed to support network
applications. Thus Java is a widely used tool in environments like the Internet, where many
different platforms coexist.

MULTITHREADED

A Java program can do many tasks simultaneously through a mechanism called
‘multithreading’. Java provides a sound solution for synchronizing multiple threads; therefore,
interactive applications on the net can run smoothly. This is made possible by the built-in
support for threads.
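The built-in thread support can be sketched as follows (an example of our own): two threads update shared state concurrently, and the `synchronized` keyword supplies the synchronization mentioned above, so no update is lost.

```java
public class ThreadDemo {
    private int count = 0;

    // 'synchronized' serializes access to the shared counter.
    public synchronized void increment() {
        count++;
    }

    public int value() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadDemo d = new ThreadDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                d.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();                    // both threads run simultaneously
        t2.start();
        t1.join();                     // wait for both to finish
        t2.join();
        System.out.println(d.value()); // prints 20000: no lost updates
    }
}
```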

DYNAMIC

Java is a dynamic language designed to adapt to an evolving environment. Java programs
carry a lot of run-time information used to validate and access objects at run time. This makes it
possible to safely link code dynamically.
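A brief example of our own illustrates this: using the run-time information carried by every class, a class can be located, loaded and instantiated by name at run time rather than being linked at compile time.

```java
public class DynamicDemo {
    public static void main(String[] args) throws Exception {
        // Resolve a class by its name at run time, not at compile time.
        Class<?> cls = Class.forName("java.util.ArrayList");

        // Instantiate it reflectively through its no-argument constructor.
        Object list = cls.getDeclaredConstructor().newInstance();

        System.out.println(cls.getName() + " loaded and instantiated: " + (list != null));
    }
}
```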

HTML

Hyper Text Markup Language, commonly referred to as HTML, is the standard markup
language used to create web pages. It is written in the form of HTML elements consisting of tags
enclosed in angle brackets (like <html>). HTML tags most commonly come in pairs like <h1>
and </h1> although some tags represent empty elements and so are unpaired, for example <img>.
The first tag in a pair is the start tag, and the second tag is the end tag (they are also called
opening tags and closing tags).

Web browsers can read HTML files and compose them into visible or audible web pages.
Browsers do not display the HTML tags and scripts, but use them to interpret the content of the
page. HTML describes the structure of a website semantically along with cues for presentation,
making it a markup language, rather than a programming language.

HTML elements form the building blocks of all websites. HTML allows images and
objects to be embedded and can be used to create interactive forms. It provides a means to create
structured documents by denoting structural semantics for text such as headings, paragraphs,
lists, links, quotes and other items. It can embed scripts written in languages such as JavaScript
which affect the behavior of HTML web pages.

CSS (Cascading Style Sheets)

Cascading Style Sheets (CSS) is a style sheet language used for describing the look and
formatting of a document written in a markup language. While most often used to change the style of
web pages and user interfaces written in HTML and XHTML, the language can be applied to any
kind of XML document, including plain XML, SVG and XUL. Along with HTML and JavaScript,
CSS is a cornerstone technology used by most websites to create visually engaging webpages, user
interfaces for web applications, and user interfaces for many mobile applications.

CSS is designed primarily to enable the separation of document content from document
presentation, including elements such as the layout, colors, and fonts. This separation can
improve content accessibility, provide more flexibility and control in the specification of
presentation characteristics, enable multiple HTML pages to share formatting by specifying the
relevant CSS in a separate .css file, and reduce complexity and repetition in the structural
content, such as semantically insignificant tables that were widely used to format pages before
consistent CSS rendering was available in all major browsers. CSS makes it possible to separate
presentation instructions from the HTML content in a separate file or style section of the HTML
file. For each matching HTML element, it provides a list of formatting instructions. For example,
a CSS rule might specify that "all heading 1 elements should be bold," leaving pure semantic
HTML markup that asserts "this text is a level 1 heading" without formatting code such as a
<bold> tag indicating how such text should be displayed.

JavaScript

JavaScript is a dynamic computer programming language. It is most commonly used as
part of Web browsers, whose implementations allow client-side scripts to interact with the user,
control the browser, communicate asynchronously, and alter the document content that is
displayed. It is also used in server-side network programming with runtime environments such as
Node.js, game development and the creation of desktop and mobile applications. With the rise of
the single-page Web app and JavaScript-heavy sites, it is increasingly being used as a compile
target for source-to-source compilers from both dynamic languages and static languages.

In particular, Emscripten and highly optimised JIT compilers, in tandem with asm.js, which
is friendly to AOT compilers like OdinMonkey, have enabled C and C++ programs to be
compiled into JavaScript and executed at near-native speeds, leading JavaScript to be considered
the "assembly language of the Web", according to its creator and others.

WINDOWS (OPERATING SYSTEM)

Windows OS is a computer operating system (OS) developed by Microsoft Corporation to
run personal computers (PCs). Featuring the first graphical user interface (GUI) for IBM-compatible
PCs, the Windows OS soon came to dominate the PC market. Approximately 90 percent of
PCs run some version of Windows.

The first version of Windows, released in 1985, was simply a GUI offered as an
extension of Microsoft’s existing disk operating system, or MS-DOS. Based in part on licensed
concepts that Apple Inc. had used for its Macintosh System Software, Windows for the first time
allowed DOS users to visually navigate a virtual desktop, opening graphical “windows”
displaying the contents of electronic folders and files with the click of a mouse button, rather
than typing commands and directory paths at a text prompt.

FEATURES

1) Windows Easy Transfer : One of the first things you might want to do is to transfer your files
and settings from your old computer to the brand new computer. You can do this using an Easy
Transfer Cable, CDs or DVDs, a USB flash drive, a network folder, or an external hard disk.

2) Windows Anytime Upgrade : This feature of Windows Operating System allows you to
upgrade to any higher windows version available for your system, so you can take full advantage
of enhanced digital entertainment and other features.

3) Windows Basics : If you are new to Windows or want to refresh your knowledge about areas
such as security or working with digital pictures, this feature will help you to get started.

4) Searching and Organizing : Most folders in Windows have a search box in the upper- right
corner. To find a file in a folder, type a part of the file name in the search box.

5) Parental Controls : Parental Controls give you the means to decide when your children use
the computer, which website they visit, and which games they are allowed to play. You can also
get reports of your children's computer activity as well.

6) Ease of Access Center : Ease of Access Center is the place to find and change settings that can
enhance how you hear, see and use your computer. You can adjust text size and the speed of your
mouse. This is also where you can go to set up your screen reader and find other helpful tools.

7) Default Programs : This is a feature of your Windows Operating System where you can
adjust and set your default programs, associate a file type or a protocol with a program, change
AutoPlay settings, and set program access and computer defaults.

8) Remote Desktop Connection : This feature provides a user with a graphical interface to
another computer. It uses a proprietary protocol developed by Microsoft specially for the
Windows Operating System. Basically, by entering the IP address of the other computer you can
directly see that computer's desktop right on your own desktop.

SYSTEM TESTING

Testing Overview
Who tests a system depends on the process and the associated stakeholders of the
project(s). In the IT industry, large companies have a team with responsibility for evaluating the
developed software in the context of the given requirements. Moreover, developers also conduct
testing, which is called Unit Testing. In most cases, the following professionals are involved in
testing a system within their respective capacities:

 Software Tester

 Software Developer

 Project Lead/Manager

 End User

Different companies have different designations for people who test the software on the
basis of their experience and knowledge such as Software Tester, Software Quality Assurance
Engineer, QA Analyst, etc.

Software cannot be tested effectively at just any point in its life cycle. The next two
sections state when testing should be started and when to end it during the SDLC.

When to Start Testing


An early start to testing reduces the cost and time needed to rework and to produce error-free
software that is delivered to the client. In the Software Development Life Cycle (SDLC),
testing can be started from the Requirements Gathering phase and continued till the deployment
of the software. It also depends on the development model that is being used. For example, in
the Waterfall model, formal testing is conducted in the testing phase; but in the incremental
model, testing is performed at the end of every increment/iteration and the whole application is
tested at the end. Testing is done in different forms at every phase of SDLC:

 During the requirement gathering phase, the analysis and verification of requirements are
also considered as testing.

 Reviewing the design in the design phase with the intent to improve the design is also
considered as testing.

 Testing performed by a developer on completion of the code is also categorized as testing.

When to Stop Testing?


It is difficult to determine when to stop testing, as testing is a never-ending process and
no one can claim that a software is 100% tested. The following aspects are to be considered for
stopping the testing process:

 Testing Deadlines

 Completion of test case execution

 Completion of functional and code coverage to a certain point

 Bug rate falls below a certain level and no high-priority bugs are identified

 Management decision

Verification & Validation
These two terms are very confusing for most people, who use them interchangeably. The
following points highlight the differences between verification and validation.

1. Verification addresses the concern: "Are you building it right?"
   Validation addresses the concern: "Are you building the right thing?"

2. Verification ensures that the software system meets all the functionality.
   Validation ensures that the functionalities meet the intended behavior.

3. Verification takes place first and includes the checking of documentation, code, etc.
   Validation occurs after verification and mainly involves the checking of the overall product.

4. Verification is done by developers.
   Validation is done by testers.

5. Verification involves static activities, as it includes collecting reviews, walkthroughs, and
   inspections to verify a software.
   Validation involves dynamic activities, as it includes executing the software against the
   requirements.

6. Verification is an objective process; no subjective decision should be needed to verify a
   software.
   Validation is a subjective process and involves subjective decisions on how well a software
   works.

QA, QC, TESTING

Most people get confused when it comes to pinning down the differences among Quality
Assurance, Quality Control, and Testing. Although they are interrelated and can, to some extent,
be considered the same activities, there are distinguishing points that set them apart. The
following points differentiate QA, QC, and Testing.

Quality Assurance (QA):

 Includes activities that ensure the implementation of processes, procedures and standards
in the context of verification of the developed software against the intended requirements.

 Focuses on processes and procedures rather than conducting actual testing on the system.

 Process-oriented activities.

 Preventive activities.

 QA is a subset of the Software Test Life Cycle (STLC).

Quality Control (QC):

 Includes activities that ensure the verification of a developed software with respect to
documented (or, in some cases, undocumented) requirements.

 Focuses on actual testing by executing the software with an aim to identify bugs/defects
through implementation of procedures and processes.

 Product-oriented activities.

 A corrective process.

 QC can be considered a subset of Quality Assurance.

Testing:

 Includes activities that ensure the identification of bugs/errors/defects in a software.

 Focuses on actual testing.

 Product-oriented activities.

 A preventive process.

 Testing is a subset of Quality Control.

Testing and Debugging
Testing : It involves identifying bugs/errors/defects in a software without correcting them.
Normally, professionals with a quality assurance background are involved in bug identification.
Testing is performed in the testing phase.

Debugging : It involves identifying, isolating, and fixing the problems/bugs. Developers
who code the software conduct debugging upon encountering an error in the code. Debugging is
a part of White Box Testing or Unit Testing. Debugging can be performed in the development
phase while conducting Unit Testing, or in later phases while fixing reported bugs.

This section describes the different types of testing that may be used to test a software
during SDLC.

TYPES OF TESTING

Manual Testing
Manual testing includes testing a software manually, i.e., without using any automated
tool or any script. In this type, the tester takes over the role of an end-user and tests the software
to identify any unexpected behavior or bug. There are different stages for manual testing such as
unit testing, integration testing, system testing, and user acceptance testing.

Testers use test plans, test cases, or test scenarios to test a software to ensure the
completeness of testing. Manual testing also includes exploratory testing, as testers explore the
software to identify errors in it.

Automation Testing
Automation testing, which is also known as Test Automation, is when the tester writes
scripts and uses other software to test the product. This process involves the automation of a
manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios
that were performed manually.

Apart from regression testing, automation testing is also used to test the application from
load, performance, and stress point of view. It increases the test coverage, improves accuracy,
and saves time and money in comparison to manual testing.

What to Automate
It is not possible to automate everything in a software. Areas where a user can make
transactions, such as login or registration forms, and any area where a large number of users
can access the software simultaneously, should be automated.

Furthermore, all GUI items, connections with databases, field validations, etc. can be
efficiently tested by automating the manual process.

METHODS USED IN TESTING

1. Black-Box Testing
The technique of testing without having any knowledge of the interior workings of the
application is called black-box testing. The tester is oblivious to the system architecture and
does not have access to the source code. Typically, while performing a black-box test, a tester
will interact with the system's user interface by providing inputs and examining outputs without
knowing how and where the inputs are worked upon.

The following table lists the advantages and disadvantages of black-box testing.

Advantages:

 Well suited and efficient for large code segments.

 Code access is not required.

 Clearly separates the user's perspective from the developer's perspective through visibly
defined roles.

 Large numbers of moderately skilled testers can test the application with no knowledge of
implementation, programming language, or operating systems.

Disadvantages:

 Limited coverage, since only a selected number of test scenarios is actually performed.

 Inefficient testing, due to the fact that the tester only has limited knowledge about the
application.

 Blind coverage, since the tester cannot target specific code segments or error-prone areas.

 The test cases are difficult to design.

2. White-Box Testing
White-box testing is the detailed investigation of internal logic and structure of the code.
White-box testing is also called glass testing or open-box testing. In order to perform white-box
testing on an application, a tester needs to know the internal workings of the code.

The tester needs to have a look inside the source code and find out which unit/chunk of
the code is behaving inappropriately.

The following table lists the advantages and disadvantages of white-box testing.

Advantages:

 As the tester has knowledge of the source code, it becomes very easy to find out which
type of data can help in testing the application effectively.

 It helps in optimizing the code.

 Extra lines of code can be removed, which can bring out hidden defects.

 Due to the tester's knowledge of the code, maximum coverage is attained during test
scenario writing.

Disadvantages:

 Because a skilled tester is needed to perform white-box testing, the costs are increased.

 Sometimes it is impossible to look into every nook and corner to find hidden errors that
may create problems, as many paths will go untested.

 It is difficult to maintain white-box testing, as it requires specialized tools like code
analyzers and debugging tools.

3. Grey-Box Testing
Grey-box testing is a technique to test an application with limited knowledge of its
internal workings. In software testing, the phrase "the more you know, the better" carries a lot
of weight while testing an application.

Mastering the domain of a system always gives the tester an edge over someone with
limited domain knowledge. Unlike black-box testing, where the tester only tests the
application's user interface; in grey-box testing, the tester has access to design documents and
the database. Having this knowledge, a tester can prepare better test data and test scenarios
while making a test plan.

Advantages:

 Offers the combined benefits of black-box and white-box testing wherever possible.

 Grey-box testers don't rely on the source code; instead they rely on interface definitions
and functional specifications.

 Based on the limited information available, a grey-box tester can design excellent test
scenarios, especially around communication protocols and data type handling.

 The test is done from the point of view of the user and not the designer.

Disadvantages:

 Since access to the source code is not available, the ability to go over the code and test
coverage is limited.

 The tests can be redundant if the software designer has already run a test case.

 Testing every possible input stream is unrealistic because it would take an unreasonable
amount of time; therefore, many program paths will go untested.

TESTING LEVELS

1. Functional Testing
This is a type of black-box testing that is based on the specifications of the software that
is to be tested. The application is tested by providing input, and the results are then examined
for conformance to the functionality it was intended for. Functional testing of a software is
conducted on a complete, integrated system to evaluate the system's compliance with its
specified requirements.

There are five steps that are involved while testing an application for functionality.

Steps Description

I The determination of the functionality that the intended application is meant to perform.

II The creation of test data based on the specifications of the application.

III The determination of the expected output based on the test data and the specifications of
the application.

IV The writing of test scenarios and the execution of test cases.

V The comparison of actual and expected results based on the executed test cases.

An effective testing practice will see the above steps applied to the testing policies of
every organization and hence it will make sure that the organization maintains the strictest of
standards when it comes to software quality.

2. Unit Testing
This type of testing is performed by developers before the setup is handed over to the
testing team to formally execute the test cases. Unit testing is performed by the respective
developers on the individual units of source code in their assigned areas. The developers use test
data that is different from the test data of the quality assurance team.

The goal of unit testing is to isolate each part of the program and show that individual
parts are correct in terms of requirements and functionality.
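As an illustrative sketch, a developer-level unit test for a hypothetical method of our own might look like the following; plain `assert` statements stand in for a testing framework such as JUnit, which would normally be used.

```java
public class PriceCalculator {

    // Unit under test: apply a percentage discount to a price.
    public static double discounted(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price * (100 - percent) / 100.0;
    }

    // Developer-written unit tests isolating just this one method.
    public static void main(String[] args) {
        assert discounted(200.0, 10.0) == 180.0 : "10% off 200 should be 180";
        assert discounted(50.0, 0.0) == 50.0 : "0% discount leaves the price unchanged";
        System.out.println("unit tests passed");
    }
}
```

Assertions are disabled by default, so the checks run only when the JVM is started with `java -ea PriceCalculator`.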

Limitations of Unit Testing
Testing cannot catch each and every bug in an application. It is impossible to evaluate
every execution path in every software application. The same is the case with unit testing.

There is a limit to the number of scenarios and test data that a developer can use to verify a
source code. After having exhausted all the options, there is no choice but to stop unit testing
and merge the code segment with other units.

3. Integration Testing
Integration testing is defined as the testing of combined parts of an application to
determine if they function correctly. Integration testing can be done in two ways: bottom-up
integration testing and top-down integration testing.

S.No Integration Testing Method

1 Bottom-up integration: This testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called modules or builds.

2 Top-down integration: In this testing, the highest-level modules are tested first and,
progressively, lower-level modules are tested thereafter.

In a comprehensive software development environment, bottom-up testing is usually
done first, followed by top-down testing. The process concludes with multiple tests of the
complete application, preferably in scenarios designed to mimic actual situations.

4. System Testing
System testing tests the system as a whole. Once all the components are integrated, the
application as a whole is tested rigorously to see that it meets the specified Quality Standards.
This type of testing is performed by a specialized testing team.

System testing is important because of the following reasons:

 System testing is the first step in the Software Development Life Cycle, where the
application is tested as a whole.

 The application is tested thoroughly to verify that it meets the functional and technical
specifications.

 The application is tested in an environment that is very close to the production
environment where the application will be deployed.

 System testing enables us to test, verify, and validate both the business requirements as
well as the application architecture.

5. Regression Testing
Whenever a change is made in a software application, it is quite possible that other areas
within the application have been affected by this change. Regression testing is performed to
verify that a fixed bug hasn't resulted in a violation of other functionality or business rules. The
intent of regression testing is to ensure that a change, such as a bug fix, does not result in
another fault being uncovered in the application.

Regression testing is important because of the following reasons:

 Minimize the gaps in testing when an application with changes made has to be tested.

 Testing the new changes to verify that the changes made did not affect any other area of
the application.

 Mitigates risks when regression testing is performed on the application.

 Test coverage is increased without compromising timelines.

 Increase speed to market the product.

6. Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the Quality
Assurance Team who will gauge whether the application meets the intended specifications and
satisfies the client’s requirement. The QA team will have a set of pre-written scenarios and test
cases that will be used to test the application.

More ideas will be shared about the application and more tests can be performed on it to
gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only
intended to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point
out any bugs in the application that will result in system crashes or major errors in the application.

By performing acceptance tests on an application, the testing team will deduce how the
application will perform in production. There are also legal and contractual requirements for
acceptance of the system.

7. Non-Functional Testing
This section is based upon testing an application from its non-functional attributes.
Non-functional testing involves testing a software against requirements that are non-functional
in nature but important, such as performance, security, user interface, etc.

Some of the important and commonly used non-functional testing types are discussed
below.

8. Performance Testing
It is mostly used to identify bottlenecks or performance issues rather than to find bugs in
a software. Different causes contribute to lowering the performance of a software:

 Network delay

 Client-side processing

 Database transaction processing

 Load balancing between servers

 Data rendering

Performance testing is considered one of the important and mandatory testing types in
terms of the following aspects:

 Speed (i.e. Response Time, data rendering and accessing)

 Capacity

 Stability

 Scalability

Performance testing can be either qualitative or quantitative and can be divided into
different sub-types such as Load testing and Stress testing.

9. Load Testing
It is a process of testing the behavior of a software by applying maximum load in terms
of software accessing and manipulating large input data.

It can be done at both normal and peak load conditions. This type of testing identifies
the maximum capacity of software and its behavior at peak time.

Most of the time, load testing is performed with the help of automated tools such as Load
Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer, Visual
Studio Load Test, etc.

Virtual users (VUsers) are defined in the automated testing tool and the script is executed
to verify the load testing for the software. The number of users can be increased or decreased
concurrently or incrementally based upon the requirements.
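The virtual-user idea can be sketched in plain Java (a simplified stand-in of our own for tools such as JMeter or LoadRunner; `doRequest` is a hypothetical placeholder for the operation under load): each thread in the pool plays one concurrent virtual user, and the user count can be raised or lowered as required.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadTestDemo {

    // Placeholder for the operation under load (e.g., an HTTP request).
    static void doRequest() {
        Math.sqrt(123456.789);
    }

    // Run 'vusers' concurrent virtual users, each issuing 'requests' requests.
    public static int run(int vusers, int requests) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(vusers);
        for (int u = 0; u < vusers; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requests; i++) {
                    doRequest();
                    completed.incrementAndGet();   // count each completed request
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(20, 100)); // 20 virtual users x 100 requests = 2000
    }
}
```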

10. Stress Testing
Stress testing includes testing the behavior of a software under abnormal conditions. For
example, it may include taking away some resources or applying a load beyond the actual load
limit.

The aim of stress testing is to test the software by applying the load to the system and
taking over the resources used by the software to identify the breaking point. This testing can be
performed by testing different scenarios such as:

 Shutdown or restart of network ports randomly



 Turning the database on or off

 Running different processes that consume resources such as CPU, memory, server, etc.

11. Usability Testing
Usability testing is a black-box technique and is used to identify any error(s) and
improvements in the software by observing the users through their usage and operation.

According to Nielsen, usability can be defined in terms of five factors: efficiency of use,
learnability, memorability, errors/safety, and satisfaction. According to him, the usability of a
product is good and the system is usable if it possesses the above factors.

Nigel Bevan and Macleod considered usability to be a quality requirement that can be
measured as the outcome of interactions with a computer system. The requirement is
fulfilled, and the end user satisfied, when the intended goals are achieved effectively with the
use of proper resources.

Molich, in 2000, stated that a user-friendly system should fulfill the following five goals:
easy to learn, easy to remember, efficient to use, satisfactory to use, and easy to understand.

In addition to these definitions of usability, there are standards, quality models, and
methods that define usability in the form of attributes and sub-attributes, such as ISO 9126,
ISO 9241-11, ISO 13407, and IEEE Std 610.12.

12. Security Testing


Security testing involves testing software in order to identify flaws and gaps from a
security and vulnerability point of view.

Listed below are the main aspects that security testing should ensure:

 Confidentiality

 Integrity

 Authentication

 Availability

 Authorization

 Non-repudiation

 Software is secure against known and unknown vulnerabilities

 Software data is secure

 Software conforms to all applicable security regulations

 Input checking and validation

 SQL injection attacks

 Injection flaws

 Session management issues

 Cross-site scripting attacks

 Buffer overflows vulnerabilities

 Directory traversal attacks

SOFTWARE TESTING DOCUMENTATION

Testing documentation covers the artifacts that should be developed before or during the
testing of software.

Documentation for software testing helps in estimating the testing effort required, test
coverage, requirement tracking/tracing, etc. This section describes some of the commonly used
documented artifacts related to software testing such as:

 Test Plan

 Test Scenario

 Test Case

 Traceability Matrix

Test Scenario
A test scenario is a one-line statement that states which area of the application will be tested.
Test scenarios are used to ensure that all process flows are tested from end to end. A particular
area of an application can have as few as one test scenario or as many as a few hundred,
depending on the magnitude and complexity of the application.

The terms 'test scenario' and 'test case' are sometimes used interchangeably; however, a test
scenario comprises several steps, whereas a test case covers a single step. Viewed from this
perspective, a test scenario is a composite test case: it groups several test cases together
with the sequence in which they should be executed.

Apart from this, each test in the sequence may depend on the output of the previous test.

Test Case
Test cases involve a set of steps, conditions, and inputs that can be used while
performing testing tasks. The main intent of this activity is to determine whether the software
passes or fails in terms of its functionality and other aspects. There are many types of test
cases, such as functional, negative, error, logical, physical, and UI test cases.

Furthermore, test cases are written to keep track of the testing coverage of the software.
Generally, there is no single formal template for test case writing. However, the
following components are commonly included in every test case:

 Test case ID



 Product module

 Product version

 Revision history

 Purpose

 Assumptions

 Pre-conditions

 Steps

 Expected outcome

 Actual outcome

 Post-conditions

Many test cases can be derived from a single test scenario. In addition, the test cases
written for a single piece of software are sometimes grouped into collections known as test suites.

Test Point Analysis


Test point analysis is an estimation technique, based on function point analysis, for black-box
or acceptance testing. The main elements of this method are: size, productivity, strategy,
interfacing, complexity, and uniformity.

CHAPTER 5

DESIGN AND IMPLEMENTATION

5.1 ARCHITECTURE DIAGRAM:

Fig.5.1 End to End Encryption in Edge Computing Architecture

5.2 UML DIAGRAMS:

fig. 5.2.1 Data Flow Diagram

fig. 5.2.2 Sequence Diagram

fig. 5.2.3 Use Case Diagram

fig. 5.2.4 Class Diagram

5.3 List of Modules:
1. Create Signature
2. Cloud manipulation
3. End to end encryption
4. Local cloud

5.3.1 Create Signature:

Digital signatures are the public-key primitives of message authentication. In the
physical world, it is common to use handwritten signatures on handwritten or typed messages
to bind the signatory to the message. Similarly, a digital signature is a technique that binds a
person or entity to digital data. This binding can be independently verified by the receiver as
well as by any third party.

A digital signature is a cryptographic value calculated from the data and a secret key
known only to the signer.

In our project, while connecting the IoT devices, we create signatures using a cryptographic
technique, and the signed data is transmitted to the IoT devices. On the edge computing
platform, each IoT device itself holds a key with which it decrypts and reads the original data.
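The signing flow can be sketched with the JDK's built-in signature API. This is illustrative only: the project text does not specify the algorithm or key size, so RSA with SHA-256 is assumed here, and the sample message is made up.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch of signing data before it is sent to an IoT device: the sender signs
// with a private key known only to the signer, and anyone holding the public
// key (the receiver or a third party) can verify the binding independently.
public class SignatureSketch {

    static KeyPair generateKeyPair() throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        return gen.generateKeyPair();
    }

    // Sender side: compute the signature over the data with the private key.
    static byte[] sign(byte[] data, KeyPair keys) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(data);
        return signer.sign();
    }

    // Receiver side: verify the signature with the public key.
    static boolean verify(byte[] data, byte[] sig, KeyPair keys) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(data);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPair keys = generateKeyPair();
        byte[] msg = "sensor reading: 23.5C".getBytes(StandardCharsets.UTF_8);
        byte[] sig = sign(msg, keys);
        System.out.println("valid: " + verify(msg, sig, keys));
        // Any tampering with the data invalidates the signature.
        byte[] tampered = "sensor reading: 99.9C".getBytes(StandardCharsets.UTF_8);
        System.out.println("tampered valid: " + verify(tampered, sig, keys));
    }
}
```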

5.3.2 Cloud Manipulation:

Cloud manipulation is the process by which the main cloud communicates with the edge
cloud. The main cloud contains all of the application's data, much of which is not needed
frequently and is not mandatory. The edge computing platform has its own local cloud, which
stores the frequently needed data and serves it to the IoT devices whenever required. To serve
data efficiently, the local cloud stores and indexes all of its data.
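The "keep only frequently needed data at the edge" behavior can be sketched as a small, bounded cache. This is a hypothetical illustration (the capacity and keys are made up), using least-recently-used eviction as one plausible policy:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the local-cloud idea: a small, bounded cache at the edge that
// keeps only recently used entries, evicting the least recently used one
// when the edge node's capacity is exceeded.
public class EdgeCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public EdgeCache(int capacity) {
        super(16, 0.75f, true); // access-order: reads count as "recent use"
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over the edge node's capacity
    }

    public static void main(String[] args) {
        EdgeCache<String, String> cache = new EdgeCache<>(2);
        cache.put("header", "h1");
        cache.put("action", "start");
        cache.get("header");            // touch "header" so it stays warm
        cache.put("log", "full-dump");  // evicts "action", the LRU entry
        System.out.println(cache.keySet()); // [header, log]
    }
}
```

On a cache miss, a real edge node would fetch the entry from the main cloud and insert it, so the local cloud converges on the working set the IoT devices actually use.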

5.3.3 End to End Encryption:

End-to-end encryption ensures that only you and the person you are communicating with
can read what is sent, and nobody in between. Messages are secured with locks, and only you
and the recipient have the special keys needed to unlock and read them.

We use this technique on the edge computing platform for the communication between the
edge local cloud and the IoT devices.
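A minimal sketch of such end-to-end protection is shown below. The report does not name the cipher used, so AES in GCM mode is assumed here; the shared key lives only on the two endpoints (local cloud and IoT device), and anything in between sees only ciphertext.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Sketch of end-to-end encryption between the edge local cloud and an IoT
// device: encrypt at one endpoint, decrypt at the other, key never leaves them.
public class E2ESketch {
    private static final int IV_LEN = 12, TAG_BITS = 128;

    static byte[] encrypt(byte[] plain, SecretKey key) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv); // fresh IV per message
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[IV_LEN + ct.length]; // prepend IV to ciphertext
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    static byte[] decrypt(byte[] msg, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(msg, 0, IV_LEN)));
        return c.doFinal(msg, IV_LEN, msg.length - IV_LEN); // also verifies the tag
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey(); // shared only by the two endpoints
        byte[] wire = encrypt("open valve 3".getBytes(StandardCharsets.UTF_8), key);
        System.out.println(new String(decrypt(wire, key), StandardCharsets.UTF_8));
    }
}
```

GCM also authenticates the ciphertext, so a message modified in transit fails decryption rather than yielding garbage.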

5.3.4 Local Cloud:

The edge computing layer is referred to as the local cloud because it is a type of cloud that
serves the IoT devices directly. It is much smaller in both size and computing power, and it
contains only the mandatory data that the IoT devices need frequently, such as data headers
and action commands.

CHAPTER 6
CONCLUSION AND FUTURE ENHANCEMENT
6.1 CONCLUSION:

We used end-to-end encryption, which is one of the most preferred encryption approaches
today, because it has recently received increasing attention as a way to protect against such
threats. End-to-end (E2E) encryption preserves the confidentiality of data on the wire, as well
as from service providers, by performing encryption and decryption at the clients and keeping
the keys strictly within the client devices. It gives an assurance that only authorized users or
devices can access the data.

Coding:

GetSignature:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package actions;

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import pack.Dbconnection;

/**
 * @author nadanapathy
 */
public class GetSignature extends HttpServlet {

    /**
     * Processes requests for both HTTP <code>GET</code> and <code>POST</code> methods.
     *
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        // PrintWriter out = response.getWriter();
        try {
            System.out.println("image executed...id :" + request.getParameter("id"));

            Connection conn = Dbconnection.getConn();
            PreparedStatement stmt =
                    conn.prepareStatement("select pimage from product where pid=?");
            stmt.setString(1, request.getParameter("id"));
            ResultSet rs = stmt.executeQuery();
            System.out.println("executed...");
            if (rs.next()) {
                response.getOutputStream().write(rs.getBytes("pimage"));
                rs.close();
                response.getOutputStream().close();
                System.out.println("executed1...");
                // response.getWriter().close();
            }
            conn.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    // <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
    /**
     * Handles the HTTP <code>GET</code> method.
     *
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    /**
     * Handles the HTTP <code>POST</code> method.
     *
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    /**
     * Returns a short description of the servlet.
     *
     * @return a String containing servlet description
     */
    @Override
    public String getServletInfo() {
        return "Short description";
    }// </editor-fold>
}

Connect devices:
<%@page import="java.net.URLDecoder"%>
<%@page import="java.util.Random"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"
/> <title>CJ</title>
<link href="css/styles.css" rel="stylesheet" type="text/css" />
<link rel="shortcut icon" type="image/x-icon" href="images/finger.png"/>
<script type="text/javascript" src="js/jquery-1.9.1.js"></script>
<script type="text/javascript" src="check.js"></script> <script
type="text/javascript" src="js/jquery-ui.js"></script> <script
type="text/javascript" src="js/carouselScript.js"></script> <link
href="css/carousel.css" rel="stylesheet" type="text/css">
<script type="text/javascript" src="js/jquery-1.10.2.js"></script>

<style>
#id{
width: 200px;

height: 25px;
background-color: #D5D5D5;
}
#but{
width: 60px;
height: 25px;
}
#disp{
width: 100px;
color: #000;
}
#disptr{
height: 30px;
}
#notetd{
width: 100px;
font-size: 20px;
color: black;
}
#notetr{
height: 100px;
}

#notetd,#notetr,#notetable{
border-style: solid;
}
</style>
<script>
var pin,pin1,pin2,pin3,pin4;var name;
function getpin(pin_no){
pin=pin_no;
pin1=pin_no.charAt(0);
pin2=pin_no.charAt(1);
pin3=pin_no.charAt(2);
pin4=pin_no.charAt(3);
// document.getElementById('pinno').innerHTML=pin;
alert(pin);
}

function setname(n){
// alert('got name');
name=n;
sendInfo(n);
setcolor();
}
var click_count=0;

function hexToRgb(hex) {
var result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex);
return result ? {
r: parseInt(result[1], 16),
g: parseInt(result[2], 16),
b: parseInt(result[3], 16)
} : null;
}
//clicked color , required color
var status='true';
function matcher(cc,rc){
if(rc===cc){
//alert('true');
}
if(rc!==cc){
//alert('false');
status='false';
}

}

function result(){
// alert(status);
if(status==='true'){
document.getElementById('result').innerHTML='Correct Pin ! ';
// window.location("");
}
if(status==='false'){
document.getElementById('result').innerHTML='Incorrect Pin !refresh page and enter
again' ;
}
}

function clickpin(c){

click_count++;
// alert("click count"+click_count);
var clicked_color='rgb('+hexToRgb(c).r+", "+ hexToRgb(c).g+", "+
hexToRgb(c).b+')'; // alert('clicked'+ clicked_color );

switch(click_count){
case 1:
var c;
if(pin1!=='0'){
//alert('!=0');
c="c"+((pin1*4)-3);
}
if(pin1==='0'){
//alert('=0');

c="c37";
}
var d='#'+c;
var styleProps = $( d ).css(["background-color"]);
$.each( styleProps, function( prop, value ) {
var required_code=value;
// alert(required_code);
matcher(clicked_color,required_code);
});
break;

case 2:
var c;
if(pin2!=='0'){
// alert(pin2+'!=0');
c="c"+((pin2*4)-2);
}
if(pin2==='0'){
// alert('=0');
c="c38";
}
var d='#'+c;
var styleProps = $( d ).css(["background-color"]);
$.each( styleProps, function( prop, value ) {
var required_code=value;
//alert(required_code);
matcher(clicked_color,required_code);
});
break;

case 3:
var c;
if(pin3!=='0'){

//alert('!=0');
c="c"+((pin3*4)-1);
}
if(pin3==='0'){
//alert('=0');
c="c39";
}
var d='#'+c;
var styleProps = $( d ).css(["background-color"]);
$.each( styleProps, function( prop, value ) {
var required_code=value;
// alert(required_code);
matcher(clicked_color,required_code);
});
break;

case 4:
var c;
if(pin4!=='0'){
//alert('!=0');
c="c"+(pin4*4);
}
if(pin4==='0'){
// alert('=0');
c="c40";
}
var d='#'+c;
var styleProps = $( d ).css(["background-color"]);
$.each( styleProps, function( prop, value ) {
var required_code=value;
// alert(required_code);
matcher(clicked_color,required_code);
}); result();

break;

default:
alert('Invalid Pin..');
}

setcolor();
}

function setcolor(){

var color=["#0066CC","#AA3300","#939191","#990066","#FFFFFF","#FFFF99","#CC3399","#2EB847","#000","#00FFCC"];

// var x = Math.floor((Math.random() * 10) + 1);
var cn=[];
for(var n=1;n<41;n++){

var nn=Math.floor(Math.random() * 10); // index 0-9 into the 10-entry color array


//cn.push(nn);
var c='c'+n;
//alert(c);
document.getElementById(c).style.backgroundColor = color[nn];
//alert(nn);
}

// var idnames =["c1","c2","c3","c4","c5","c6","c7","c8","c9","c10"];

// for(var i=0;i<10;i++){
//
// document.getElementById(idnames[i]).style.backgroundColor = color[i];

// }
//var c='c1';
// document.getElementById(c).style.backgroundColor = color[cn[2]];
}
</script>
</head>
<%
// HttpSession user=request.getSession(true);
String uname=URLDecoder.decode(request.getQueryString());

%>
<body onload="setname('<%=uname%>');">
<%
if(request.getParameter("status")!=null){
out.println("<script>alert('Registered')</script>");
}
// out.println("<script>alert('Registered')</script>");
// Random r=new Random();
// int[] c =new int[10];
// for(int i=0;i<10;i++){
// c[i]=r.nextInt(10);
// }

%>
<div class="header-wrap">
<div class="header">
<div class="logo">
<h1>Secure Computing</h1>
</div>
<div class="menu">
<ul>
<li><a href="index.html">Home</a></li>
<li><a href="user_login.jsp" class="active">User Login </a></li>

<li><a href="user_register.jsp">Registration </a></li>
<!-- <li><a href="work.html">Work</a></li> <li><a
href="contact.html">Contact</a></li>-->
</ul>
</div>
</div>
</div>
<!---header-wrap-end--->
<div class="banner-wrap">
<div class="banner">
<div class="banner-img">
<div id="carousel">

<!-- <h1>Welcome ! <%=uname%></h1><p id="pinno"></p>-->


<h1>Welcome ! </h1><p id="pinno"></p>
<fieldset style="position: absolute;left: 0px;top: 100px; width: 300px;border-style:
none;" >
<table id="notetable">
<caption><h1>Guide To Pick
color</h1></caption> <tr id="notetr">
<td id="notetd">
1st No<br>
Top Left
</td>
<td id="notetd" >
2nd No<br>
Top Right
</td>

</tr>
<tr id="notetr">
<td id="notetd">
3rd No<br>

Bottom Left
</td>
<td id="notetd">
4th No<br>
Bottom Right
</td>
</tr>
</table>
</fieldset>
<div style="margin-top: 30px;margin-left: 350px;">
<form>
<fieldset style="width: 310px">
<table >
<tr id="disptr">
<td>1</td><td>2</td><td>3</td>
</tr>
<tr id="disptr" >
<td id="disp">
<input type="text" id="c1" style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c2" style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c3" style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c4" style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
<td id="disp">
<input type="text" id="c5" style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c6"style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c7"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c8"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>

<td id="disp">
<input type="text" id="c9"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c10"style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c11"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text"id="c12" style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
</tr>
<tr id="disptr">
<td>4</td><td>5</td><td>6</td>
</tr>
<tr id="disptr">
<td>
<input type="text" id="c13"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text"id="c14" style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text"id="c15" style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c16"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
<td>
<input type="text" id="c17"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c18" style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c19"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c20"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
<td>
<input type="text" id="c21"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text"id="c22" style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>

<input type="text" id="c23"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c24"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
</tr>
<tr id="disptr">
<td>7</td><td>8</td><td>9</td>
</tr>
<tr id="disptr">
<td>
<input type="text" id="c25" style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c26"style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c27" style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c28"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
<td>
<input type="text" id="c29"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c30"style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c31"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c32"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
<td>
<input type="text" id="c33"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c34"style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c35"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c36"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>

</tr>
<tr id="disptr">
<td></td><td>0</td><td></td>
</tr>
<tr id="disptr">
<td>
</td>
<td>
<input type="text" id="c37"style="background-color: #0066CC;width:
20px;" readonly="readonly"></input><input type="text" id="c38"style="background-color:
#AA3300;width: 20px;" readonly="readonly"></input><br>
<input type="text" id="c39"style="background-color: #939191;width:
20px;" readonly="readonly"></input><input type="text" id="c40"style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
</td>
<td>
</td>
</tr>

</table>
</fieldset>
<br>

<br>
<table align="center" style="margin-left: 00px;" >
<caption><h2 style="color: #000;font-size: 20px;">Color
Picker</h2></caption>
<tr >
<td onclick="">
<div >
<input type="text" onclick="clickpin('#0066CC')" style="background-color:
#0066CC;width: 20px;" readonly="readonly"></input>

<input type="text" onclick="clickpin('#AA3300')" style="background-
color: #AA3300;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#939191')" style="background-color:
#939191;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#990066')" style="background-color:
#990066;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#FFFFFF')" style="background-color:
#FFFFFF;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#FFFF99')" style="background-color:
#FFFF99;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#CC3399')" style="background-color:
#CC3399;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#2EB847')" style="background-color:
#2EB847;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#000000')" style="background-color:
#000000;width: 20px;" readonly="readonly"></input>
<input type="text" onclick="clickpin('#00FFCC')" style="background-
color: #00FFCC;width: 20px;" readonly="readonly"></input>
</div>
</td>
</tr>
</table>
</form>
<div id="result" style="position: absolute;top: 200px;left: 800px;color:
#f11;">
</div>

</div>
<div class="clear"></div>
<!-- <div id="buttons"> <a href="#" id="prev">prev</a> <a href="#" id="next">next</a>

<div class="clear"></div>
</div>-->

</div>

</div>

</div>
<div class="clearing"></div>
</div>
<!---banner-end--->
<div class="footer-wrap">
<div class="footer">
<div class="content mar-right30">

<div class="clearing"></div>
</div>
<div class="clearing"></div>
</div>
<div class="clearing"></div>
<div class="copyright-wrap">
<!-- <div class="copyright">
<div class="content">
<p>Copyright (c) websitename. All rights reserved. < <a
href="www.alltemplateneeds.com" class="active">www.alltemplateneeds.com </a>>
Images From: <a href="www.photorack.net">www.photorack.net</a></p>
</div>
</div>-->
</div>
</body>
</html>

SCREENSHOTS:

LOGIN PAGE OF DRIVE HQ:

LOGIN PAGE OF EDGE CLOUD:

LOGIN PAGE OF IoT DEVICES(SYSTEM):

LOGIN PAGE OF IoT DEVICES(MOBILE):

References:

[1] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, "Edge computing: Vision and challenges," IEEE Internet Things J., vol. 3, no. 5, pp. 637–646, Oct. 2016.
[2] F. Bonomi, R. A. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the Internet of Things," in Proc. 1st Ed. MCC Workshop Mobile Cloud Comput., Helsinki, Finland, Aug. 2012, pp. 13–16.
[3] A. Sahai and B. Waters, "Fuzzy identity-based encryption," in Advances in Cryptology—EUROCRYPT (Lecture Notes in Computer Science). Aarhus, Denmark: Springer, May 2005, pp. 457–473.
[4] M. Green, S. Hohenberger, and B. Waters, "Outsourcing the decryption of ABE ciphertexts," in Proc. 20th USENIX Secur. Symp., San Francisco, CA, USA: USENIX Assoc., Aug. 2011, pp. 1–16.
[5] J. Lai, R. H. Deng, C. Guan, and J. Weng, "Attribute-based encryption with verifiable outsourced decryption," IEEE Trans. Inf. Forensics Security, vol. 8, no. 8, pp. 1343–1354, Aug. 2013.
[6] OpenFog Consortium Architecture Working Group, "OpenFog Architecture Overview," 2016. Accessed: Dec. 7, 2016. [Online]. Available: http://www.openfogconsortium.org/wp-content/uploads/OpenFog-Architecture-Overview-WP-2-2016.pdf
[7] F. Bonomi, "Connected vehicles, the Internet of Things, and fog computing," in Proc. VANET, 2011.
[8] M. Green, S. Hohenberger, and B. Waters, "Outsourcing the decryption of ABE ciphertexts," 2011. The full version of this paper is available from the Cryptology ePrint Archive.
[9] C. Gentry and S. Halevi, "Implementing Gentry's fully-homomorphic encryption scheme," in Proc. EUROCRYPT, 2011, pp. 129–148.
[10] D. Boneh, A. Sahai, and B. Waters, "Functional encryption: Definitions and challenges," in Proc. TCC, 2011, pp. 253–273.
[11] T. D. Dang, D. Hoang, and P. Nanda, "Data mobility management model for active data cubes," in Proc. 14th IEEE Int. Conf. Trust, Security and Privacy in Computing and Communications, 2015, pp. 750–757.
[12] M. S. de Brito et al., "A service orchestration architecture for fog-enabled infrastructures," in Proc. 2nd Int. Conf. Fog Mobile Edge Comput. (FMEC), May 2017, pp. 127–132.
[13] L. Gao, T. H. Luan, B. Liu, W. Zhou, and S. Yu, "Fog computing and its applications in 5G," in 5G Mobile Communications. Cham, Switzerland: Springer, 2017, pp. 571–593.
[14] H. Khazaei, H. Bannazadeh, and A. Leon-Garcia, "End-to-end management of IoT applications," in Proc. IEEE Conf. Netw. Softw. (NetSoft), Jul. 2017, pp. 1–3.
[15] R. Roman, J. Lopez, and M. Mambo, "Mobile edge computing, Fog et al.: A survey and analysis of security threats and challenges," Future Gener. Comput. Syst., vol. 78, pp. 680–698, Jan. 2018.

