
International Journal of Interdisciplinary Engineering (IJIE), Vol. 1, Issue 1, ISSN: 2456-5687

IMPROVING SCALABILITY OF PRISM USING DISTRIBUTED CLOUD-BASED RESOURCE-AWARE SCHEDULERS
Paluri Sirisha Sumallika1, Mr. P. Sreekanth, Assoc. Professor2
1,2 Department of Computer Science & Engineering,
Godavari Institute of Engineering and Technology, Rajahmundry, A.P., India
1 e-mail: koolsirisha@gmail.com
2 e-mail: srikanthpuli66@gmail.com

ABSTRACT

Information Technology is as vast as the ocean. Virtualizing IT in the cloud enables resource optimization, economy, and much more compared with classical technology. In this paper we study a cost-reduction scheme, pay-per-use, together with the network: a methodology for balancing the data load, making the environment dynamic, adjusting the number of servers, and optimizing network traffic. The part we play is significant in enhancing what is called the technology of change. We cache the first step in order to avoid repeating it, create virtual nodes, and use the MapReduce concept to slice the work into multiple parts for parallel processing, scheduling the task tracker to provide high data accuracy, optimization, better throughput time, and efficiency according to a strategy that depends on the peak of the data load, thereby stabilizing the computing environment. Technologically the system remains virtual, yet it must stand up to any real-world enterprise solution. The structure of the high-end scheduler of the reducer gives rise to the structure of PRISM; in other words, its flexibility in debug mode increases and lets the developer provide a highly optimized scheduler.

Keywords: cloud computing, cost efficiency, MapReduce, Hadoop, scheduling, resources, authorization policies, consistency.

I. INTRODUCTION
Software as a Service (SaaS) is the cloud model most widely known among internet users. In this model, the end user directly uses software hosted by the cloud provider, which could be an online e-mail service, for instance. This model differs somewhat from the previous ones in that the cloud users are the customers of the application, whereas in both previous models the cloud users are often businesses. A cloud data-storage service provides both secure data outsourcing and efficient data retrieval and restoration, and involves four different parties: the data owner, the data user, the cloud server, and the third-party server. The data owner distributes the encoded chunks of a file M to n cloud servers, denoted storage servers. If the data owner needs to keep the data content confidential, the file can first be encrypted before encoding. Outsourced data are associated with some metadata to provide an integrity-check capability. After the data are outsourced, a user can pick any k storage servers to retrieve encoded segments and recover the file M, which can be further decrypted if the file was encrypted. In the meantime, the third-party server periodically checks the integrity of the data kept on the cloud servers. Deteriorated cloud servers can be repaired with the help of other, healthy cloud servers.
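
To make this k-of-n storage model concrete, the following is a minimal Python sketch of Reed-Solomon-style erasure coding over GF(257), written under our own assumptions (the function names, the toy zero-padding, and the systematic layout are illustrative; a real deployment would use a production erasure-coding library over GF(256), encrypt before encoding, and attach the authenticated metadata described above):

P = 257  # prime field: every byte value 0..255 fits; shares are field values in 0..256

def _lagrange_at(points, t):
    # Evaluate, at x = t, the unique degree-(k-1) polynomial through `points` (mod P).
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (t - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, k, n):
    # Systematic k-of-n encoding: servers 1..k store the data bytes themselves,
    # servers k+1..n store parity evaluations of the interpolating polynomial.
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)    # toy padding, assumed removable
    shares = {x: [] for x in range(1, n + 1)}
    for off in range(0, len(data), k):
        block = data[off:off + k]
        pts = list(enumerate(block, start=1))    # (x, byte) pairs for x = 1..k
        for x in range(1, k + 1):
            shares[x].append(block[x - 1])
        for x in range(k + 1, n + 1):
            shares[x].append(_lagrange_at(pts, x))
    return shares                                 # {server id: list of field values}

def decode(subset, k):
    # Recover the padded file from ANY k of the n shares.
    xs = sorted(subset)[:k]
    out = bytearray()
    for pos in range(len(subset[xs[0]])):
        pts = [(x, subset[x][pos]) for x in xs]
        for t in range(1, k + 1):                # the data lives at x = 1..k
            out.append(_lagrange_at(pts, t))
    return bytes(out)

shares = encode(b"hello cloud", k=3, n=5)         # file M spread over n = 5 servers
recovered = decode({x: shares[x] for x in (2, 4, 5)}, k=3)  # any k = 3 suffice
assert recovered.rstrip(b"\x00") == b"hello cloud"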


Figure 1: Illustration of the Cloud Cost Model


Security is a concern, as destructive software downloaded by employees browsing the internet can invade the enterprise network. Certainly, destructive tenants could be willing to explore the data of other tenants or to gain access to their networks by using methods like ARP cache poisoning or IP spoofing. In addition, a misconfiguration of routing appliances could lead to a breach in confidentiality. In order to run a large-scale service like YouTube, several data centers around the world are required, and managing a service becomes particularly demanding and expensive if the service is successful. To control these issues, utility computing (a.k.a. cloud computing) has been proposed as a new way to operate services on the internet. Some capabilities trade off between the consistency and the response time of a write request. The authors of related work develop replication for transactional databases with eventual consistency, in which an updated data item becomes consistent eventually.
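
As a toy illustration of eventual consistency (our own sketch, not the cited replication scheme), the store below acknowledges a write after updating a single replica and propagates it to the others in the background, so reads may briefly return stale data:

import threading, time, random

class EventuallyConsistentStore:
    # Toy key-value store: a write is acknowledged after one replica is
    # updated; the remaining replicas converge asynchronously.
    def __init__(self, n_replicas=3, max_lag=0.5):
        self.replicas = [{} for _ in range(n_replicas)]
        self.max_lag = max_lag

    def write(self, key, value):
        self.replicas[0][key] = value           # acknowledged here, fast
        for r in self.replicas[1:]:             # replicate in the background
            threading.Timer(random.uniform(0, self.max_lag),
                            r.__setitem__, (key, value)).start()

    def read(self, key):
        r = random.choice(self.replicas)        # any replica may serve the read
        return r.get(key)                        # may be stale for a short window

store = EventuallyConsistentStore()
store.write("x", 1)
print(store.read("x"))   # may print None: a stale replica answered
time.sleep(1)
print(store.read("x"))   # prints 1 once replication has converged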

II. RELATED WORK
With respect to data confidentiality, all the providers claim to provide and support encryption. Many cloud providers let their customers encrypt data before transmitting it to the cloud. For instance, Amazon S3 makes it optional for users to encrypt data for enhanced security, though this is not recommended in the case of third-party or external auditors. CloudFront is an Amazon web service which preserves data confidentiality while data are being transferred. Attempts to apply functional changes to LT-codes-based distributed storage must first solve how to recode packets, because the random linear recoding at work in network-coding-based storage codes cannot satisfy the degree distribution of LT codes. Each and every application deployed on a cloud platform should be able to take advantage of the auto-scaling feature. Decisions for scaling up or scaling down have an important impact on performance and resource usage, because there is an overhead attached to the auto-scaling system, so it is very important to distinguish between an actual change in the workload and an anomaly.
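
One simple way to separate a real workload shift from an anomaly, sketched below under our own assumptions (EWMA smoothing with fixed thresholds; real auto-scalers use richer policies), is to act only on the smoothed load:

class AutoScaler:
    # Smooth the observed load with an exponentially weighted moving
    # average (EWMA) and scale only when the smoothed value crosses a
    # threshold, so a short spike (an anomaly) does not trigger scaling.
    def __init__(self, alpha=0.2, up=80.0, down=30.0):
        self.alpha, self.up, self.down = alpha, up, down
        self.ewma = None

    def observe(self, load):
        self.ewma = load if self.ewma is None else (
            self.alpha * load + (1 - self.alpha) * self.ewma)
        if self.ewma > self.up:
            return "scale-up"
        if self.ewma < self.down:
            return "scale-down"
        return "hold"

scaler = AutoScaler()
trace = [35, 40, 38, 250, 37, 39]            # one 250% spike is an anomaly
print([scaler.observe(x) for x in trace])    # the lone spike never triggers scale-up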


Figure 2: Cloud Traffic Balancing


A tenant may be harmed by the cloud provider or by some foreign clients, and the cloud provider can in turn be attacked by tenants. Our project does not focus on the situation where the provider is the offender; indeed, we assume that the provider is honest. We further assume that all network appliances are compliant and protected. It may seem that this problem can be solved by the newly proposed LT network codes (LTNC), which give efficient decoding at the cost of somewhat more communication in the single-source propagation setting. On the other hand, after several rounds of repair with the same recoding operations regulated in LT network codes, data users experience decoding failure with high probability. Our project aims to protect the traffic of a tenant; in this situation, the traffic could first be threatened by other tenants.

III. METHODOLOGY
To begin with, dedicated hardware, even though it may be idle at a given time, has to be monopolized for the tenants who require peak protection. In other words, the pooling of resources would be limited, which may lead to a huge network. Certainly, the dedicated servers might have hosted other tenants' virtual machines; however, these VMs would have to be placed on additional servers, contributing to a waste of resources, as the servers would not be loaded to their maximal capacity. Additionally, as use of the cloud grows, one can easily imagine that such a rigid architecture would become harder and harder to maintain and configure. The Proof Generation process is run by the storage provider in order to produce a proof transcript, which could be a set of corresponding tags or, in many situations, an aggregation of the concerned tags and an aggregation of data blocks. A proof transcript allows the verifier to obtain the proof needed to check the correctness of the challenged blocks. Upon receiving the reply from the storage source, the verifier executes the Proof Verification protocol to verify its validity.
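
The following minimal sketch illustrates the spirit of this Proof Generation / Proof Verification exchange with per-block HMAC tags; it is a simplification under our own assumptions (real PDP schemes aggregate tags and blocks so that the prover does not return whole blocks, and tags can themselves be outsourced):

import hmac, hashlib, os, random

BLOCK = 4096

def tag_blocks(key, data):
    # Setup by the verifier: one keyed HMAC tag per fixed-size block.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, b"%d" % i + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def prove(data, challenged):
    # Proof Generation by the storage provider: return the challenged blocks.
    return {i: data[i * BLOCK:(i + 1) * BLOCK] for i in challenged}

def verify(key, tags, proof):
    # Proof Verification: recompute each tag and compare in constant time.
    return all(
        hmac.compare_digest(
            hmac.new(key, b"%d" % i + blk, hashlib.sha256).digest(), tags[i])
        for i, blk in proof.items())

key = os.urandom(32)
data = os.urandom(10 * BLOCK)              # stand-in for the outsourced file
tags = tag_blocks(key, data)               # retained by the verifier
challenge = random.sample(range(10), 3)    # spot-check 3 of the 10 blocks
assert verify(key, tags, prove(data, challenge))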


Figure 3.1: Batch-based DFS architecture of the high-end security model
As soon as enterprises decide to shift to the cloud, they wish to keep the same requirements regarding their policy management. It would be feasible for the system administrator of the enterprise to implement the middle-boxes and the routing policies on VMs in the same way as before moving to the cloud. However, one of the goals of moving to the cloud is to escape the burden of network administration and configuration. Cloud computing shares the basic theme of previous paradigms connected with provisioning computing infrastructure; on the other hand, cloud computing differs in that it shifts the location of the infrastructure to the network in order to provide basic components. These basic components, such as storage, CPUs, and network bandwidth, are made available as a service by dedicated service providers at a low unit cost. Users of these services are assured that they need not be concerned about scalability and backup, because the available resources are virtually infinite and failed components are replaced without any service disturbance or data loss.

Transactional data management is the heart of the database industry. A transaction is a logical unit of work that consists of a series of read and/or write operations on the database. Furthermore, the variety of security policies in place in enterprise networks is often quite similar, as it consists in the traversal of several middle-boxes. The data centers in question are located in Northern Virginia (USA), and the company's cluster of cloud computing services in Virginia was recently reported on the Amazon website to be experiencing degraded performance.

Considering both data repair and data retrieval, we design an LT-codes-based cloud storage service (LTCS). We study multi-keyword ranked search over encrypted cloud data and establish a variety of privacy requirements. Among the various multi-keyword semantics, we select the efficient similarity measure of coordinate matching, i.e., as many matches as possible, to effectively capture the relevance of outsourced documents to the query keywords, and we use inner-product similarity to quantitatively evaluate that similarity measure.
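
Coordinate matching via inner-product similarity can be illustrated in a few lines; the sketch below scores plaintext vectors only, with a toy vocabulary of our own choosing (the actual scheme additionally encrypts the vectors so the server ranks without seeing them):

VOCAB = ["cloud", "storage", "integrity", "scheduler", "encryption"]

def to_vector(words):
    # Binary keyword vector over a fixed vocabulary.
    present = set(words)
    return [1 if w in present else 0 for w in VOCAB]

def score(doc_vec, query_vec):
    # Coordinate matching = inner product = number of shared keywords.
    return sum(d * q for d, q in zip(doc_vec, query_vec))

docs = {
    "d1": to_vector(["cloud", "storage", "integrity"]),
    "d2": to_vector(["scheduler", "cloud"]),
}
query = to_vector(["cloud", "integrity"])
ranked = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
print(ranked)   # ['d1', 'd2']: d1 matches 2 query keywords, d2 matches 1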


3.1 ANALYSIS AND INTERPRETATION


These days, data centers are filled with hundreds of thousands of servers, and this number is very likely to increase in the upcoming years. In addition, customers can request the creation or removal of VMs in order to meet their demand. A correlated event happened about four months earlier, when an electrical storm caused some disturbance to the same data centers.

Figure 3.1.1: Comparison of dataset and security features with size and cost
Thus, cloud clients are keen to be allowed to store their data in the cloud and, at the same time, would like to be able to verify for themselves that their data is protected.

IV. CONCLUSION
In the pay-per-use model of cloud computing, the system makes everything global while taking smaller organizations as its base of use. It is distinguished from traditional multi-domain approaches by the characteristics of the emerging cloud environments. We use a top-down methodology in this research, referring to the PEI stack, and obtain secure and reliable cloud storage with efficiency.
