1. INTRODUCTION
Cloud computing is also facing many challenges that, if not well resolved, may
impede its fast growth. Data security, as it exists in many other applications, is among
these challenges that would raise great concerns from users when they store sensitive
information on cloud servers. These concerns originate from the fact that cloud servers
are usually operated by commercial providers which are very likely to be outside of the
trusted domain of the users. Data confidentiality against cloud servers is hence frequently
desired when users outsource data for storage in the cloud. In some practical application
systems, data confidentiality is not only a security/privacy issue but also a juristic
concern. For example, in healthcare application scenarios, the use and disclosure of
protected health information (PHI) should meet the requirements of the Health Insurance
Portability and Accountability Act (HIPAA) [5], and keeping user data confidential against
the storage servers is not just an option, but a requirement. Furthermore, we observe that
there are also cases in which cloud users themselves are content providers. They publish
data on cloud servers for sharing and need fine-grained data access control in terms of
which user (data consumer) has the access privilege to which types of data. In the
healthcare case, for example, a medical center would be the data owner who stores
millions of healthcare records in the cloud. It would allow data consumers such as
doctors, patients, and researchers to access various types of healthcare records under
policies permitted by HIPAA. To enforce these access policies, the data owners on one
hand would like to take advantage of the abundant resources that the cloud provides for
efficiency and economy; on the other hand, they may want to keep the data contents
confidential against cloud servers.
We address this open issue and propose a secure and scalable fine-grained data
access control scheme for cloud computing. Our proposed scheme is partially based on
our observation that, in practical application scenarios, each data file can be associated
with a set of attributes which are meaningful in the context of interest. The access
structure of each user can thus be defined as a unique logical expression over these
attributes to reflect the scope of data files that the user is allowed to access.
As the logical expression can represent any desired data file set, fine-grainedness
of data access control is achieved. To enforce these access structures, we define a public
key component for each attribute. Data files are encrypted using public key components
corresponding to their attributes. User secret keys are defined to reflect their access
structures so that a user is able to decrypt a cipher text if and only if the data file
attributes satisfy his access structure. Such a design also brings about the efficiency
benefit, as compared to previous works, in that 1) the complexity of encryption is related
only to the number of attributes associated with the data file and is independent of the
number of users in the system, and 2) data file creation/deletion and new user grant
operations affect only the current file/user without involving system-wide data file update
or re-keying [23]. One extremely challenging issue with this design is the implementation of
user revocation, which would inevitably require re-encryption of data files accessible to
the leaving user, and may need update of secret keys for all the remaining users. If all
these tasks are performed by the data owner himself/herself, it would introduce a heavy
computation overhead on him/her and may also require the data owner to be always
online. To resolve this challenging issue, our proposed scheme enables the data owner to
delegate tasks of data file re-encryption and user secret key update to cloud servers
without disclosing data contents or user access privilege information. We achieve our
design goals by exploiting a novel cryptographic primitive, namely key policy attribute-
based encryption.
2. SYSTEM STUDY
Disadvantages
Advantages
• Low initial capital investment
Notation   Description
PK, MK     system public key and master key
Ti         public key component for attribute i
ti         master key component for attribute i
SK         user secret key
ski        user secret key component for attribute i
Ei         ciphertext component for attribute i
I          attribute set assigned to a data file
DEK        symmetric data encryption key of a data file
P          user access structure
LP         set of attributes attached to leaf nodes of P
AttD       the dummy attribute
UL         the system user list
AHLi       attribute history list for attribute i
rki↔i'     proxy re-encryption key for attribute i from its current version to the updated version i'
δO,X       the data owner's signature on message X
New File Creation Before uploading a file to Cloud Servers, the data owner
processes the data file as follows.
• select a unique ID for this data file;
• randomly select a symmetric data encryption key DEK ←R K, where K is the key space,
and encrypt the data file using DEK;
• define a set of attributes I for the data file and encrypt DEK with I using KP-ABE, i.e.,
(Ẽ, {Ei}i∈I ) ← AEncrypt(I,DEK,PK).
Header: ID, I, Ẽ, {Ei}i∈I
Body: {DataFile}DEK
Fig. 3: Format of a data file stored on the cloud
Finally, each data file is stored on the cloud in the format as is shown in Fig.3.
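As a concrete (toy) illustration of the steps above, the sketch below packs a file into the header/body format of Fig. 3. The XOR keystream cipher and the `a_encrypt_stub` placeholder are illustrative assumptions only; a real deployment would use a proper symmetric cipher and the actual KP-ABE AEncrypt.

```python
# Toy sketch of New File Creation: a random DEK encrypts the file body, and a
# stub stands in for AEncrypt(I, DEK, PK).  The XOR "cipher" is NOT secure.
import os
import hashlib

def toy_stream_encrypt(key: bytes, data: bytes) -> bytes:
    # Derive a keystream by hashing key||counter and XOR it with the data.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def a_encrypt_stub(attrs, dek, pk):
    # Placeholder for KP-ABE AEncrypt: returns (E~, {E_i} for i in I).
    return {"E~": "<E~>", "E_i": {a: f"<E_{a}>" for a in attrs}}

def create_file(plaintext: bytes, attrs, pk=None):
    file_id = os.urandom(8).hex()              # unique ID for the data file
    dek = os.urandom(32)                       # DEK drawn from key space K
    header = {"ID": file_id, "I": sorted(attrs),
              "KP-ABE": a_encrypt_stub(attrs, dek, pk)}
    body = toy_stream_encrypt(dek, plaintext)  # {DataFile}_DEK
    return header, body, dek

header, body, dek = create_file(b"patient record #42", {"PHI", "cardiology"})
assert toy_stream_encrypt(dek, body) == b"patient record #42"  # XOR is symmetric
```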
New User Grant When a new user wants to join the system, the data owner
assigns an access structure and the corresponding secret key to this user as follows.
• assign the new user a unique identity w and an access structure P;
• generate a secret key SK for w, i.e., SK ← AKeyGen(P,MK);
• encrypt the tuple (P, SK,PK, δO,(P,SK,PK)) with user w’s public key, denoting the cipher-
text by C;
• send the tuple (T,C, δO,(T,C)) to Cloud Servers, where T denotes the tuple
(w, {j, skj}j∈LP\AttD). On receiving the tuple (T,C, δO,(T,C)), Cloud Servers proceed as follows.
• verify δO,(T,C) and proceed if correct;
• store T in the system user list UL;
• forward C to the user.
On receiving C, the user first decrypts it with his private key. Then he verifies the
signature δO,(P,SK,PK). If correct, he accepts (P, SK,PK) as his access structure, secret key,
and the system public key.
As described above, Cloud Servers store all the secret key components of SK
except for the one corresponding to the dummy attribute AttD. Such a design allows
Cloud Servers to update these secret key components during user revocation as we will
describe soon. As there still exists one undisclosed secret key component (the one for
AttD), Cloud Servers cannot use these known ones to correctly decrypt ciphertexts.
Actually, these disclosed secret key components, if given to any unauthorized user, do
not give him any extra advantage in decryption as we will show in our security analysis.
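The split described above can be sketched as follows; the function name and component values are hypothetical, and the point is only that the tuple T stored in UL never contains the AttD component.

```python
# Sketch of the secret-key split at user grant: Cloud Servers receive every
# sk_j except the dummy-attribute component, so they can update keys later
# but never hold a complete secret key.
ATT_D = "AttD"

def split_secret_key(user_id, sk_components: dict):
    server_share = {j: sk for j, sk in sk_components.items() if j != ATT_D}
    user_only = {ATT_D: sk_components[ATT_D]}   # stays with the user only
    T = (user_id, server_share)                  # stored in the user list UL
    return T, user_only

T, user_only = split_secret_key("w", {"doctor": 11, "cardiology": 22, ATT_D: 33})
assert ATT_D not in T[1] and user_only == {ATT_D: 33}
```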
User Revocation We start with the intuition of the user revocation operation as
follows. Whenever there is a user to be revoked, the data owner first determines a
minimal set of attributes without which the leaving user’s access structure will never be
satisfied. Next, he updates these attributes by redefining their corresponding system
master key components in MK. Public key components of all these updated attributes in
PK are redefined accordingly. Then, he updates user secret keys accordingly for all the
users except for the one to be revoked. Finally, DEKs of affected data files are re-
encrypted with the latest version of PK. The main issue with this intuitive scheme is that
it would introduce a heavy computation overhead for the data owner to re-encrypt data
files and might require the data owner to be always online to provide secret key update
service for users. To resolve this issue, we combine the technique of proxy re-encryption
with KP-ABE and delegate tasks of data file re-encryption and user secret key update to
Cloud Servers. More specifically, we divide the user revocation scheme into two stages
as is shown below.
// to revoke user v
// Stage 1: attribute update.
The Data Owner:
1. D ← AMinimalSet(P), where P is v's access structure;
2. for each attribute i in D:
   (t'i, T'i, rki↔i') ← AUpdateAtt(i, MK);
3. send Att = (v, D, {i, T'i, δO,(i,T'i), rki↔i'}i∈D) to Cloud Servers.
Cloud Servers, on receiving Att:
1. remove v from the system user list UL;
2. for each attribute i ∈ D:
   store (i, T'i, δO,(i,T'i));
   add rki↔i' to i's attribute history list AHLi.

// Stage 2: data file and user secret key update.
User (u):
1. generate data file access request REQ and send it to Cloud Servers;
2. wait for the response from Cloud Servers;
3. on receiving RESP, verify each δO,(j,T'j) and sk'j; proceed if correct;
4. replace each skj in SK with sk'j;
5. decrypt each file in FL with SK.
Cloud Servers:
1. on receiving REQ, proceed if u ∈ UL;
2. get the tuple (u, {j, skj}j∈LP\AttD);
   for each attribute j ∈ LP\AttD:
      sk'j ← AUpdateSK(j, skj, AHLj);
   for each requested file f in REQ:
      for each attribute k ∈ If:
         E'k ← AUpdateAtt4File(k, Ek, AHLk);
3. send RESP = ({j, sk'j, T'j, δO,(j,T'j)}j∈LP\AttD, FL).

Fig. 4: Description of the process of user revocation
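The exponent bookkeeping behind AUpdateAtt and AUpdateAtt4File can be demonstrated in a toy group (quadratic residues mod 23, order 11, generator 4). This is an assumption-laden miniature: real constructions work in bilinear groups, and it only shows why raising Ei = Ti^s to rk = t'/t yields T'i^s.

```python
# Toy AUpdateAtt / AUpdateAtt4File in a tiny prime-order group.  The group is
# far too small for security; it illustrates the exponent arithmetic only.
import secrets

p, q, g = 23, 11, 4   # quadratic residues mod 23; subgroup order q = 11

def a_update_att(t_old):
    # The owner picks a fresh master-key component t' and a PRE key rk = t'/t.
    t_new = secrets.randbelow(q - 1) + 1
    T_new = pow(g, t_new, p)                 # updated public key component T'_i
    rk = t_new * pow(t_old, -1, q) % q       # rk_{i <-> i'}, a ratio mod q
    return t_new, T_new, rk

def a_update_att4file(E_i, rk):
    # Cloud Servers re-encrypt a ciphertext component without seeing the DEK.
    return pow(E_i, rk, p)

t, s = 3, 5
E_i = pow(pow(g, t, p), s, p)                # ciphertext component E_i = T_i^s
t_new, T_new, rk = a_update_att(t)
assert a_update_att4file(E_i, rk) == pow(T_new, s, p)   # now T'_i^s
```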
In the first stage, the data owner determines the minimal set of attributes,
redefines MK and PK for involved attributes, and generates the corresponding PRE keys.
He then sends the user’s ID, the minimal attribute set, the PRE keys, the updated public
key components, along with his signatures on these components to Cloud Servers, and
can go off-line again. Cloud Servers, on receiving this message from the data owner,
remove the revoked user from the system user list UL, store the updated public key
components as well as the owner’s signatures on them, and record the PRE key of the
latest version in the attribute history list AHL for each updated attribute. AHL of each
attribute is a list used to record the version evolution history of this attribute as well as
the PRE keys used. Every attribute has its own AHL. With AHL, Cloud Servers are able
to compute a single PRE key that enables them to update the attribute from any historical
version to the latest version. This property allows Cloud Servers to update user secret
keys and data files in the “lazy” way as follows. Once a user revocation event occurs,
Cloud Servers just record information submitted by the data owner as is previously
discussed. Only when there is a data file access request from a user do Cloud Servers
re-encrypt the requested files and update the requesting user's secret key. This statistically
saves a lot of computation overhead since Cloud Servers are able to “aggregate” multiple
update/re-encryption operations into one if there is no access request occurring across
multiple successive user revocation events.
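Because each PRE key is a ratio of consecutive master-key components, the keys recorded in AHLi compose multiplicatively, which is exactly what makes this lazy aggregation work. A toy sketch in a small demo group (quadratic residues mod 23, order 11, generator 4; sizes chosen for illustration only):

```python
# Each PRE key in AHL_i is t_{k+1}/t_k mod q, so the whole history collapses
# into one key from any old version to the latest: several missed updates
# cost Cloud Servers a single exponentiation.
p, q, g = 23, 11, 4          # tiny demo group

def aggregate(ahl):
    rk = 1
    for r in ahl:            # product of per-revocation PRE keys
        rk = rk * r % q
    return rk

t = [3, 5, 2, 8]             # attribute versions t0 -> t1 -> t2 -> t3
ahl = [t[k + 1] * pow(t[k], -1, q) % q for k in range(3)]
s = 7
old = pow(pow(g, t[0], p), s, p)             # component under oldest version t0
new = pow(pow(g, t[3], p), s, p)             # component under latest version t3
assert pow(old, aggregate(ahl), p) == new    # one hop across three updates
```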
File Access This is also the second stage of user revocation. In this operation,
Cloud Servers respond user request on data file access, and update user secret keys and
re-encrypt requested data files if necessary. As is depicted in Fig. 4, Cloud Servers first
verify if the requesting user is a valid system user in UL. If true, they update this user’s
secret key components to the latest version and re-encrypt the DEKs of requested data
files using the latest version of PK. Notably, Cloud Servers will not perform update/re-
encryption if secret key components/data files are already of the latest version. Finally,
Cloud Servers send updated secret key components as well as ciphertexts of the requested
data files to the user. On receiving the response from Cloud Servers, the user first verifies
if the claimed version of each attribute is really newer than the current version he knows.
For this purpose, he needs to verify the data owner’s signatures on the attribute
information (including the version information) and the corresponding public key
components, i.e., tuples of the form (j, T’j) in Fig. 4. If correct, the user further verifies if
each secret key component returned by Cloud Servers is correctly computed. He verifies
this by computing a bilinear pairing between sk’j and T’j and comparing the result with
that between the old skj and Tj that he possesses. If verification succeeds, he replaces each
skj of his secret key with sk'j and updates Tj with T'j. Finally, he decrypts data files by first
calling ADecrypt(P, SK, E) to decrypt the DEKs and then decrypting the data files using them.
File Deletion This operation can only be performed at the request of the data
owner. To delete a file, the data owner sends the file’s unique ID along with his signature
on this ID to Cloud Servers. If verification of the owner's signature returns true, Cloud
Servers delete the data file.
Algorithm Level Operations Algorithm level operations include eight algorithms:
ASetup, AEncrypt, AKeyGen, ADecrypt, AUpdateAtt, AUpdateSK, AUpdateAtt4File, and
AMinimalSet. As the first four are the same as Setup, Encryption, Key Generation, and
Decryption of standard KP-ABE respectively, we focus on our implementation of the last
four algorithms.
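The document only names AMinimalSet, so the following is one plausible sketch under stated assumptions: access structures are threshold-gate trees (as defined in the access tree section of this document), and we greedily kill the cheapest n-k+1 children of each k-of-n gate. The greedy union is not guaranteed globally minimal when branches share attributes.

```python
# Plausible AMinimalSet sketch: find a cheap set of leaf attributes whose
# removal leaves the threshold tree unsatisfiable.  A k-of-n gate fails once
# n - k + 1 of its children fail, so we recurse and keep the cheapest kills.
def minimal_set(node):
    if "attr" in node:                       # leaf: kill it by updating its attribute
        return {node["attr"]}
    kills = sorted((minimal_set(c) for c in node["children"]), key=len)
    need = len(node["children"]) - node["k"] + 1
    out = set()
    for s in kills[:need]:                   # union of the 'need' cheapest kill-sets
        out |= s
    return out

tree = {"k": 1, "children": [                # OR of two branches
    {"k": 2, "children": [{"attr": "doctor"}, {"attr": "cardiology"}]},  # AND
    {"attr": "admin"}]}
# The OR gate needs both branches dead; the AND branch dies with one attribute.
assert minimal_set(tree) in ({"doctor", "admin"}, {"cardiology", "admin"})
```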
The feasibility of the project is analyzed in this phase, and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system
analysis the feasibility study of the proposed system is to be carried out. This is to ensure
that the proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential. Three key
considerations involved in the feasibility analysis are:
♦ ECONOMICAL FEASIBILITY
♦ TECHNICAL FEASIBILITY
♦ SOCIAL FEASIBILITY
3. SYSTEM SPECIFICATIONS
4. SOFTWARE ENVIRONMENT
“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and
Windows.NET Server, for instance) and services (like Passport, .NET My Services, and
so on).
The following features of the .NET framework are also worth description:
Managed Code
Managed code is code that targets .NET and contains certain extra information,
called "metadata", that describes itself. Whilst both managed and unmanaged code can run in the
runtime, only managed code contains the information that allows the CLR to guarantee,
for instance, safe execution and interoperability.
Managed Data
With Managed Code comes Managed Data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use Managed Data
by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++,
do not. Targeting CLR can, depending on the language you’re using, impose certain
constraints on the features available. As with managed and unmanaged code, one can
have both managed and unmanaged data in .NET applications - data that doesn’t get
garbage collected but instead is looked after by unmanaged code.
The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.
The class library is subdivided into a number of sets (or namespaces), each
providing distinct areas of functionality, with dependencies between the namespaces kept
to a minimum.
The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions of
Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but
there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features
include inheritance, interfaces, and overloading, among others. Visual Basic also now
supports structured exception handling, custom attributes and also supports multi-
threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of
migrating existing C++ applications to the new .NET Framework.
C# is Microsoft’s new language. It’s a C-style language that is essentially “C++
for Rapid Application Development”. Unlike other languages, its specification is just the
grammar of the language. It has no standard library of its own, and instead has been
designed with the intention of using the .NET libraries as its own.
Active State has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the
Visual Studio .NET environment. Visual Perl includes support for Active State’s Perl
Dev Kit.
Other languages for which .NET support is available include:
• FORTRAN
• COBOL
• Eiffel
Constructors and Destructors
Constructors are used to initialize objects, whereas destructors are used to destroy
them. In other words, destructors are used to release the resources allocated to the object.
In C#.NET the Finalize method is available. The Finalize method is used to complete the
tasks that must be performed when an object is destroyed, and it is called automatically
when an object is destroyed. In addition, the Finalize method can be called only from the
class it belongs to or from derived classes.
Garbage Collection
In C#.NET, the garbage collector checks for the objects that are not currently in
use by applications. When the garbage collector comes across an object that is marked for
garbage collection, it releases the memory occupied by the object.
Overloading
Multithreading
The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the
term Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called Microsoft
SQL Server 2000 Meta Data Services. References to the component now use the term
Meta Data Services. The term repository is used only in reference to the repository
engine within Meta Data Services. A SQL Server database consists of the objects specified
below.
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE
A database is a collection of data about a specific topic.
VIEWS OF TABLE
We can work with a table in two types,
1. Design View
2. Datasheet View
Design View
To build or modify the structure of a table, we work in the table design view. We
can specify what kind of data will be held.
Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY:
A query is a question that has to be asked of the data. Access gathers data that
answers the question from one or more tables. The data that make up the answer is either a
dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query,
we get the latest information in the dynaset. Access either displays the dynaset or snapshot
for us to view or performs an action on it, such as deleting or updating.
5. SYSTEM DESIGN
Input design covers the methods for preparing input validations and the steps to follow when errors occur.
Objectives
1. Input Design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process
and show the correct direction to the management for getting correct information from
the computerized system.
2. It is achieved by creating user-friendly screens for data entry to handle large
volumes of data. The goal of designing input is to make data entry easier and free
from errors. The data entry screen is designed in such a way that all data manipulations
can be performed. It also provides record viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with the help
of screens. Appropriate messages are provided as needed so that the user is not left
confused. Thus the objective of input design is to create an input layout that
is easy to follow.
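The validation-on-entry idea in point 3 can be made concrete with a small sketch; the field names and rules below are hypothetical examples, not part of the described system.

```python
# Minimal per-field input validation: each entry is checked against a rule and
# the user receives a message instead of invalid data reaching the system.
import re

RULES = {
    "patient_id": (r"^\d{6}$", "Patient ID must be exactly 6 digits."),
    "email":      (r"^[^@\s]+@[^@\s]+\.[^@\s]+$", "Enter a valid e-mail address."),
}

def validate(field, value):
    pattern, message = RULES[field]
    return (True, "") if re.match(pattern, value) else (False, message)

assert validate("patient_id", "123456") == (True, "")
assert validate("email", "not-an-address")[0] is False
```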
1. Designing computer output should proceed in an organized, well-thought-out manner;
the right output must be developed while ensuring that each output element is designed so
that people will find the system easy and effective to use. When analysts design
computer output, they should identify the specific output that is needed to meet the
requirements.
2. Create documents, reports, or other formats that contain information produced by the
system.
The output form of an information system should accomplish one or more of the
following objectives:
• Convey information about past activities, current status, or projections of the future.
• Signal important events, opportunities, problems, or warnings.
• Trigger an action.
• Confirm an action.
[Flowchart: the data owner logs in or creates an account and uploads files to the cloud
server; the server checks whether a file already exists, and on entry of the correct
secret key displays duplicate data.]
[Use case diagram: the data owner creates an account, logs in, and uploads files; the
user creates an account, logs in, and searches files; the cloud server maintains secret
keys and secures files.]
Class diagram:
[Class diagram: a Userdata class with attributes Userid, Fileid, Filename,
Encrypteddata, Decrypteddata, publickey, and privatekey, and operations
createaccount(), generatekeys(), encryption(), decryption(), downloadfiles(), and
showduplicates().]
Sequence diagram:
[Sequence diagram: the user creates an account and searches files through the cloud
server.]
Activity diagram:
[Activity diagram: after login, the flow branches by role; the owner uploads files and
sends data to the cloud server, which checks whether the files exist and maintains files
and user details, while new users are directed to create an account.]
6. MODULE DESCRIPTION
Setup Attributes:
This algorithm is used to set attributes for users. This is a randomized algorithm
that takes no input other than the implicit security parameter. It defines a bilinear group
G1 of prime order p with a generator g, a bilinear map e : G1 × G1 → G2 which has the
properties of bilinearity, computability, and non-degeneracy. From these attributes public
key and master key for each user can be determined. The attributes, public key and
master key are denoted as
Attributes: U = {1, 2, . . . , N}
Public key: PK = (Y, T1, T2, . . . , TN)
Master key: MK = (y, t1, t2, . . . , tN)
where Ti ∈ G1 and ti ∈ Zp are for attribute i, 1 ≤ i ≤ N, and Y ∈ G2 is another public key
component. We have Ti = g^ti and Y = e(g, g)^y, y ∈ Zp. While PK is publicly known to all
the parties in the system, MK is kept as a secret by the authority party.
Encryption:
This is a randomized algorithm that takes a message M, the public key PK, and a
set of attributes I as input. It outputs the cipher text E with the following format:
E = (I, Ẽ, {Ei}i∈I)
where Ẽ = M·Y^s, Ei = Ti^s, and s is randomly chosen from Zp.
Decryption:
This algorithm takes as input the cipher text E encrypted under the attribute set I,
the user's secret key SK for access tree T, and the public key PK. It first computes e(Ei,
ski) = e(g, g)^(pi(0)·s) for leaf nodes. Then, it aggregates these pairing results in a
bottom-up manner using the polynomial interpolation technique. Finally, it can recover the
blinding factor Y^s = e(g, g)^(y·s) and output the message M if and only if I satisfies T.
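The interpolation step can be shown on plain shares over a small prime field; in the real scheme the same Lagrange coefficients, evaluated at 0, appear inside the pairing exponents. The prime and polynomial below are illustrative choices.

```python
# Lagrange interpolation at x = 0 over Z_p: given enough points on a secret
# polynomial, recover p(0).  ADecrypt aggregates leaf pairings with exactly
# these coefficients in the exponent.
p = 97

def lagrange_at_zero(shares):
    # shares: list of (x_i, y_i) with y_i = f(x_i); returns f(0) mod p.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

f = lambda x: (42 + 11 * x + 7 * x * x) % p    # degree-2 polynomial, f(0) = 42
assert lagrange_at_zero([(1, f(1)), (2, f(2)), (3, f(3))]) == 42
```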
Access tree T:
Let T be a tree representing an access structure. Each non-leaf node of the tree
represents a threshold gate, described by its children and a threshold value. If numx
is the number of children of a node x and kx is its threshold value, then 0 < kx ≤ numx.
When kx = 1, the threshold gate is an OR gate and when kx = numx, it is an AND gate.
Each leaf node x of the tree is described by an attribute and a threshold value kx = 1.
To facilitate working with the access trees, we define a few functions. We denote
the parent of the node x in the tree by parent(x). The function att(x) is defined only if x is
a leaf node and denotes the attribute associated with the leaf node x in the tree. The
access tree T also defines an ordering between the children of every node, that is, the
children of a node are numbered from 1 to num. The function index(x) returns such a
number associated with the node x, where the index values are uniquely assigned to
nodes in the access structure for a given key in an arbitrary manner.
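Putting these definitions together, satisfaction of an access tree by an attribute set can be checked recursively; the dictionary encoding of nodes below is an assumption made for illustration.

```python
# Check whether attribute set I satisfies access tree T: a gate with threshold
# k_x is satisfied when at least k_x children are; a leaf when att(x) is in I.
def satisfies(node, attrs):
    if "attr" in node:                      # leaf node: k_x = 1, membership test
        return node["attr"] in attrs
    hits = sum(satisfies(c, attrs) for c in node["children"])
    return hits >= node["k"]

T = {"k": 2, "children": [                  # a 2-of-3 threshold gate
    {"attr": "doctor"}, {"attr": "cardiology"}, {"attr": "on_call"}]}
assert satisfies(T, {"doctor", "on_call"})
assert not satisfies(T, {"doctor"})
```

Setting k to 1 gives an OR gate and k equal to the number of children gives an AND gate, matching the definition above.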
Efficiency:
We now consider the efficiency of the scheme in terms of cipher text size, private
key size, and computation time for decryption and encryption. The cipher text overhead
will be approximately one group element in G1 for every element in I. That is, the number
of group elements will be equal to the number of descriptive attributes in the cipher text.
Similarly, the encryption algorithm will need to perform one exponentiation for each
attribute in I.
The public parameters in the system will be of size linear in the number of
attributes defined in the system. User's private keys will consist of a group element for
every leaf in the key's corresponding access tree. The decryption procedure is by far the
hardest to define performance for. In our rudimentary decryption algorithm the number of
pairings to decrypt might always be as large as the number of nodes in the tree. However,
this method is extremely suboptimal and we now discuss methods to improve upon it.
One important idea is for the decryption algorithm to do some type of exploration
of the access tree relative to the cipher text attributes before it makes cryptographic
computations. At the very least the algorithm should first discover which nodes are not
satisfied and not bother performing cryptographic operations on them.
The number of group elements that compose a user's private key grows linearly
with the number of leaf nodes in the access tree. The number of group elements in a
cipher text grows linearly with the size of the set we are encrypting under. Finally, the
number of group elements in the public parameters grows linearly with the number of
attributes in the defined universe. Later, we provide a construction for large universes
where all elements in Z*p can be used as attributes, yet the size of public parameters only
grows linearly in a parameter n that we set to be the maximum possible size of I.
Divertible Protocols:
The basic observation was that some two-party identification protocols could be
extended by placing an intermediary (called a warden, for historical reasons) between the
prover and verifier so that, even if both parties conspire, they cannot distinguish talking
to each other through the warden from talking directly to a hypothetical honest verifier
and honest prover, respectively.
In order to deal with protocols of more than two parties, we generalize the notion
of Interactive Turing machine (ITM). Then we define connections of ITMs and finally
give the definition of protocol divertibility.
Here, on the other hand, we investigate the possibility of atomic proxy functions
that convert ciphertext for one key into ciphertext for another without revealing secret
decryption keys or cleartext messages. An atomic proxy function allows an untrusted
party to convert ciphertext between keys without access to either the original message or
to the secret component of the old key or the new key.
Transparent proxy keys reveal the original two public keys to a third party.
Translucent proxy keys allow a third party to verify a guess as to which two keys are
involved (given their public keys). Opaque proxy keys reveal nothing, even to an
adversary who correctly guesses the original public keys (but who does not know the
secret keys involved).
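A toy of the atomic-proxy idea, in the spirit of the Blaze-Bleumer-Strauss ElGamal variant, over a tiny group (quadratic residues mod 23). The group is far too small for security; it only shows that the proxy key b/a converts ciphertexts without exposing the message or either secret key.

```python
# ElGamal-variant proxy re-encryption sketch: ciphertext (m*g^k, g^{a*k})
# under key a becomes a ciphertext under key b via the proxy key rk = b/a.
p, q, g = 23, 11, 4                       # demo group: QRs mod 23, order 11

def enc(m, sk, k):
    return m * pow(g, k, p) % p, pow(g, sk * k, p)

def dec(ct, sk):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)       # (g^{sk*k})^{1/sk} = g^k
    return c1 * pow(gk, -1, p) % p        # m = (m * g^k) / g^k

def reencrypt(ct, rk):
    c1, c2 = ct
    return c1, pow(c2, rk, p)             # g^{a*k} -> g^{b*k}; c1 untouched

a, b = 3, 5
rk = b * pow(a, -1, q) % q                # proxy key b/a mod q
ct = enc(6, a, 7)
assert dec(reencrypt(ct, rk), b) == 6     # re-encrypted for b, still decrypts
```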
Plutus allows owners of files to revoke other people’s rights to access those files.
Following a revocation, we assume that it is acceptable for the revoked reader to read
unmodified or cached files. A revoked reader, however, must not be able to read updated
files, nor may a revoked writer be able to modify the files. Settling for lazy revocation
trades re-encryption cost for a degree of security.
To make revocation less expensive, one can delay re-encryption until a file is
updated. This notion of lazy revocation was first proposed in Cepheus. The idea is that
there is no significant loss in security if revoked readers can still read unchanged files.
This is equivalent to the access the user had during the time that they were authorized
(when they could have copied the data onto floppy disks, for example). Expensive re-
encryption occurs only when new data is created. The meta-data still needs to be
immediately changed to prevent further writes by revoked writers.
A revoked reader who has access to the server will still have read access to the
files not changed since the user’s revocation, but will never be able to read data updated
since their revocation. Lazy revocation, however, is complicated when multiple files are
encrypted with the same key, as is the case when using filegroups. In this case, whenever
a file gets updated, it gets encrypted with a new key. This causes filegroups to get
fragmented (meaning a filegroup could have more than one key), which is undesirable.
The next section describes how we mitigate this problem; briefly, we show how readers
and writers can generate all the previous keys of a fragmented filegroup from the current
key.
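One simple way to realize "the current key derives all previous keys" is a reverse hash chain; note that Plutus itself uses RSA-based key rotation, so this sketch is an illustrative stand-in, and using the raw seed as the newest key is a toy simplification.

```python
# Reverse hash chain for lazy-revocation key regression: hashing the current
# key forward yields every earlier version, but never a later one.
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def make_versions(seed: bytes, n: int):
    # Owner precomputes n versions; index 0 is the oldest (most-hashed) key.
    keys = [seed]
    for _ in range(n - 1):
        keys.append(H(keys[-1]))
    return keys[::-1]

def derive_previous(current: bytes, steps: int):
    # A reader holding the current key hashes forward to recover old keys.
    for _ in range(steps):
        current = H(current)
    return current

keys = make_versions(b"filegroup-seed", 4)
assert derive_previous(keys[3], 2) == keys[1]   # newest key derives version 1
```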
7. SYSTEM TESTING
TYPES OF TESTS
Unit Testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of
individual software units of the application. It is done after the completion of an
individual unit before integration. This is structural testing that relies on knowledge of
its construction and is invasive. Unit tests perform basic tests at component level and test
a specific business process, application, and/or system configuration. Unit tests ensure
that each unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
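The unit-testing practice described above, as a minimal runnable example; the business rule and test names are hypothetical and only illustrate one assertion per decision branch.

```python
# A small unit-test example: each branch of a toy business rule is exercised.
import io
import unittest

def access_allowed(role: str, record_type: str) -> bool:
    # hypothetical rule: doctors see everything, patients only their summaries
    if role == "doctor":
        return True
    return role == "patient" and record_type == "summary"

class TestAccessRule(unittest.TestCase):
    def test_doctor_any_record(self):
        self.assertTrue(access_allowed("doctor", "lab_result"))

    def test_patient_summary_only(self):
        self.assertTrue(access_allowed("patient", "summary"))
        self.assertFalse(access_allowed("patient", "lab_result"))

suite = unittest.TestLoader().loadTestsFromTestCase(TestAccessRule)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
assert result.wasSuccessful()
```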
Integration Testing
Integration tests are designed to test integrated software components to determine
if they actually run as one program. Testing is event driven and is more concerned with
the basic outcome of screens or fields. Integration tests demonstrate that although the
components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically
aimed at exposing the problems that arise from the combination of components.
Functional Testing
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system documentation,
and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
System Testing
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test. System
testing is based on process descriptions and flows, emphasizing pre-driven process links
and integration points.
Black Box Testing
Black box tests, as most other kinds of tests, must be written from a definitive
source document, such as a specification or requirements document. It is testing in
which the software under test is treated as a black box: you cannot "see" into it. The test
provides inputs and responds to outputs without considering how the software works.
Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
8. CONCLUSION
In this project we addressed the open issue of data security in cloud computing and
proposed a secure, scalable, fine-grained data access control scheme for outsourced data.
The scheme allows a data owner, such as a medical center storing healthcare records, to
enforce per-user access policies while keeping the data contents confidential against the
cloud servers themselves. Testing at the unit, integration, functional, system, and
acceptance levels confirmed that the implemented system behaves as specified, with all
test cases passing and no defects encountered.
REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D.
A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "Above the clouds: A Berkeley view
of cloud computing," University of California, Berkeley, Tech. Rep. UCB/EECS-2009-
28, Feb 2009.
[2] Amazon Web Services (AWS), Online at http://aws.amazon.com.
[3] Google App Engine, Online at http://code.google.com/appengine/.
[4] Microsoft Azure, Online at http://www.microsoft.com/azure/.
[5] 104th United States Congress, "Health Insurance Portability and Accountability Act
of 1996 (HIPAA)," Online at http://aspe.hhs.gov/admnsimp/pl104191.htm, 1996.
[6] H. Harney, A. Colgrove, and P. D. McDaniel, “Principles of policy in secure groups,”
in Proc. of NDSS’01, 2001.
[7] P. D. McDaniel and A. Prakash, “Methods and limitations of security policy
reconciliation,” in Proc. of SP’02, 2002.
[8] T. Yu and M. Winslett, “A unified scheme for resource protection in automated trust
negotiation,” in Proc. of SP’03, 2003.
[9] J. Li, N. Li, and W. H. Winsborough, “Automated trust negotiation using
cryptographic credentials,” in Proc. of CCS’05, 2005.
[10] J. Anderson, “Computer Security Technology Planning Study,” Air Force Electronic
Systems Division, Report ESD-TR-73-51, 1972,
http://seclab.cs.ucdavis.edu/projects/history/.
[11] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu, “Scalable secure
file sharing on untrusted storage,” in Proc. of FAST’03, 2003.
[12] E. Goh, H. Shacham, N. Modadugu, and D. Boneh, “Sirius: Securing remote
untrusted storage,” in Proc. of NDSS’03, 2003.
[13] G. Ateniese, K. Fu, M. Green, and S. Hohenberger, “Improved proxy re-encryption
schemes with applications to secure distributed storage,” in Proc. of NDSS’05, 2005.
[14] S. D. C. di Vimercati, S. Foresti, S. Jajodia, S. Paraboschi, and P. Samarati, “Over-
encryption: Management of access control evolution on outsourced data,” in Proc. of
VLDB’07, 2007.
[15] V. Goyal, O. Pandey, A. Sahai, and B. Waters, "Attribute-based encryption for fine-
grained access control of encrypted data," in Proc. of CCS'06, 2006.
[16] M. Blaze, G. Bleumer, and M. Strauss, “Divertible protocols and atomic proxy
cryptography,” in Proc. of EUROCRYPT ’98, 1998.
[17] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, “Enabling public verifiability and
data dynamics for storage security in cloud computing,” in Proc. of ESORICS ’09, 2009.
[18] L. Youseff, M. Butrico, and D. D. Silva, “Toward a unified ontology of cloud
computing,” in Proc. of GCE’08, 2008.
[19] S. Yu, K. Ren, W. Lou, and J. Li, “Defending against key abuse attacks in kp-abe
enabled broadcast systems,” in Proc. of SECURECOMM’09, 2009.
[20] D. Sheridan, “The optimality of a fast CNF conversion and its use with SAT,” in
Proc. of SAT’04, 2004.
[21] D. Naor, M. Naor, and J. B. Lotspiech, “Revocation and tracing schemes for
stateless receivers,” in Proc. of CRYPTO’01, 2001.
[22] M. Atallah, K. Frikken, and M. Blanton, “Dynamic and efficient key management
for access hierarchies,” in Proc. of CCS’05, 2005.
[23] Shucheng Yu, Cong Wang, Kui Ren, and Wenjing Lou, “Achieving Secure,
Scalable, and Fine-grained Data Access Control in Cloud Computing,” in Proc. of
INFOCOM’10, 2010.
Screen Shots:
The page that asks for a mobile number to provide the secret key to the user
Search results
Download page
Search results (the last file was uploaded in the previous section by the owner)
No download occurred