INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)

ISSN 0976-6367 (Print), ISSN 0976-6375 (Online)
Volume 6, Issue 3, March (2015), pp. 24-32
IAEME: www.iaeme.com/IJCET.asp
Journal Impact Factor (2015): 8.9958 (Calculated by GISI)
www.jifactor.com

ENHANCEMENT OF CLOUD SECURITY THROUGH SCHEDULED HIDING OF DATA

Aslam B Nandyal1, Rupesh Mishra2, Shripad Thombare3

1Department of Computer Engineering, SFIT/University of Mumbai, India
2Department of Computer Engineering, SFIT/University of Mumbai, India
3Department of Information Technology, SFIT/University of Mumbai, India

ABSTRACT
People nowadays have adapted to cloud computing and its unparalleled advantages for making their lives simpler and easier. Often, cloud users are asked to post their valuable and private information to the cloud, such as credit/debit card numbers, passwords, PINs and account numbers. These data are stored in various locations during computation and are also cached, without the user's intervention or knowledge. Hence there is a need to secure the data stored in the cloud.
The proposed system mainly aims at restricting access to cloud data in a systematic manner. Whenever users enter their private information into the cloud, they can specify a time-to-live value. After the specified time-to-live has elapsed, the data becomes inaccessible. If the user still wants access, he must request an access grant from the admin. On approval by the admin, the user receives a key from the system through which the data can be accessed. Data encryption techniques and the Shamir secret sharing algorithm enhance the overall security of the proposed system, as the private keys themselves are not stored; only the shares generated from them are stored.
Keywords: Cached Copies, Cloud Computing, Cloud Security, Scheduled Hiding.
1. INTRODUCTION
Security issues have played the most important role in hindering the acceptance of cloud computing. Without doubt, putting your data on someone else's hard disk and running your software on someone else's CPU appears daunting to many. Well-known security issues such as data loss and phishing pose serious threats to an organization's data and software.

As people depend more and more on the internet and cloud technology, the privacy of their data is exposed to more and more risk. Data on the cloud needs to be processed, transformed and stored by the underlying computer systems and networks. Several copies of the data are produced and stored across the network of systems. Users are unaware of these processes, possess no knowledge of the copies of their data and hence cannot control them. There is a serious threat that these copies may get leaked.
Data on the cloud may also be leaked through the negligence of the cloud service providers (CSPs), a hacker's intrusion or some legal action. These problems present formidable challenges to protecting people's privacy. This system presents a solution that implements scheduled hiding of data. It defines two new modules: a self-destruct method object associated with each secret key part, and a survival-time parameter for each secret key part. The system can meet the requirements of self-destructing data with controllable survival time, while users can still use it as a general object storage system.
Scheduled hiding of data mainly aims at protecting the privacy of the user's data. All the data belonging to a user becomes untraceable after a user-specified time, without any user intervention. However, if the user still wants the data back after that time, he can send a request that must be approved by the admin, after which he regains access to the data. The system meets the challenge of data privacy through a novel integration of cryptographic techniques with active storage techniques based on the T10 OSD standard. Evaluations of the functionality and security properties of the system prototype demonstrate that it is practical to use and meets the privacy-preserving goals described.
Among previous works on cloud security, the system named Vanish implemented self-destructing data, which can be used in various scenarios; one such example is email [1]. In Vanish, if you are sending an email to someone, once the content has been read by the recipient it is no longer of use. To elaborate, suppose a person named Ramu wants to discuss a sensitive topic with his friend Somu over email. Once Somu has read the content of the email, it need not keep residing in his email client; more specifically, it need not remain in a readable format.
That is, the email should self-destruct after a user-specified time. Say Ramu sent an email specifying that it should remain readable only until 10 am tomorrow. In fact, Ramu would prefer that such emails disappear early and not be read even by his friend rather than risk disclosure to unintended parties. After that particular time period the email becomes corrupt or unreadable, and the recipient can no longer read it.
SafeVanish is an improvement on Vanish [2]. The original Vanish system was vulnerable to hopping attacks and sniffing attacks. SafeVanish proposes a new scheme that prevents hopping attacks by extending the length range of the key shares, which increases the attack cost substantially, and it makes some improvements to the Shamir secret sharing algorithm implemented in the original Vanish system.
In both systems mentioned above, data loss after the user-stipulated time was inevitable: once the specified time had passed, the data could not be traced back, and there was no way to recover it after self-destruction. To overcome this drawback, this system of scheduled hiding of data is introduced, in which the data can also be recovered after the user-specified time. Say a person wants to store data for a specified period, after which it should become inaccessible. For example, an organization may want to keep some data in the cloud for its business partners for a particular time period, after which it should no longer be accessible. If, after that time, a business partner still requires access to that data, they can send a request to the organization. After approval from the organization, the respective business partner can access the data using the one-time password
provided by the system. Thus, through the scheduled hiding of data system, the risk of permanent data loss can be reduced.
1.1 Methodologies
In this system, the Shamir secret sharing scheme is used to store the private key corresponding to each stored file. The scheme is explained briefly below. Shamir's Secret Sharing is a cryptographic algorithm created by Adi Shamir. It is a form of secret sharing in which a secret is divided into parts, giving each participant its own unique part, where some or all of the parts are needed in order to reconstruct the secret. The main motive is to hide a secret, which may be any information or data. To do this securely, the Shamir secret sharing scheme divides a secret into N shares. These shares do not resemble the actual data in any manner; they are totally different, and the term shadows is used for the generated shares.
The original idea for recovering a secret is to divide it into several pieces so that later, if you have some of these pieces, not necessarily all of them, it is possible to recover the hidden secret. For example, consider writing a secret on a piece of paper, cutting it into several pieces and distributing them. To reconstruct the secret we would need all the previously generated pieces, and if even one piece of paper is lost the hidden data cannot be regenerated. The actual scheme is more robust and more involved than this.
The aim is to obtain different values from the secret such that these values give no clue about its content. The scheme works as follows: N shadows/shares are generated from the secret data that is to be hidden, and a minimum of K out of the N shadows/shares is needed to regenerate the hidden data. This adds to the overall security of the system, where only the shares generated from the keys are stored; the keys themselves are not stored for each file.
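As an illustrative sketch only (the paper does not give an implementation), the following Python code shows the core of a (K, N) Shamir scheme: each shadow is a point on a random polynomial of degree K-1 whose constant term is the secret, and any K shadows recover the secret by Lagrange interpolation at x = 0. The prime field and the helper names are assumptions made for this example.

    import secrets

    # A Mersenne prime large enough to hold a 256-bit key; illustrative choice.
    PRIME = 2**521 - 1

    def make_shares(secret, k, n):
        # Random polynomial of degree k-1 with constant term equal to the secret.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        # Each shadow is the point (x, poly(x)); on its own it reveals nothing.
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        # Lagrange interpolation at x = 0 over the prime field.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    # Split a key into N = 5 shadows; any K = 3 of them regenerate it.
    shadows = make_shares(123456789, k=3, n=5)
    assert recover_secret(shadows[:3]) == 123456789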
Apart from using the Shamir secret sharing algorithm, this system addresses a limitation of earlier self-destructing data systems such as Vanish and SafeVanish, namely that data loss was inevitable: after the user-specified time, the data pertaining to the respective user would be destroyed forever and could not be traced back. Using the scheduled hiding of data system, this limitation can be overcome.
In the scheduled hiding of data system, the contents of each file are encrypted using the AES algorithm. Furthermore, the keys corresponding to the files are not stored; only the generated shares are stored. Here the user can get the data back even after the expiration period, i.e. the time-to-live value, has elapsed. This system helps in gaining customers' faith in cloud service providers by giving them data security without fear of data loss.
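A minimal sketch of this encryption step, assuming the third-party cryptography package and AES in GCM mode (the paper only states that AES is used, not the mode); the function and variable names are illustrative, not the paper's implementation.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_file(path):
        key = AESGCM.generate_key(bit_length=256)   # fresh random key per file
        nonce = os.urandom(12)                      # 96-bit nonce required by GCM
        with open(path, "rb") as f:
            plaintext = f.read()
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        # The nonce + ciphertext is what the cloud stores; the key itself is never
        # persisted, it is only split into Shamir shares (see the sketch above).
        return key, nonce + ciphertext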
2. CLOUD COMPUTING
Cloud computing is a technology that provides on-demand network access to a shared pool of computing resources that can be managed with minimal effort. A cloud is a distributed and parallel computing system consisting of interconnected and virtualized computers that can be provisioned dynamically on demand and presented as one or more unified computing resources, based on agreements between consumers and providers. It is a form of computing where the user consumes IT-related capabilities as services over the internet rather than as a product. The main purpose of cloud computing technology is to provide cost-effective, on-request computing infrastructure with good quality-of-service levels. Most application developers on the cloud struggle to build in security; in other cases, developers are incapable of providing real security with currently available technological capabilities.

The architecture of cloud computing involves multiple cloud components interacting with each other about the various data they hold, which helps the user reach the required data at a faster rate. The cloud is commonly described in terms of a front end and a back end: the front end is the consumer who requests the data, whereas the back end is the collection of data storage devices and servers that make up the cloud.
2.1 Understanding Cloud Computing
Users connect to the cloud as if it were a single application, device or document. Everything inside the cloud system, such as the hardware and the operating system that manages the hardware connections, is invisible. Cloud computing starts with the user interface seen by individual users. This is how users submit their requests, which are then passed to the system management layer, which finds the appropriate resources and calls the appropriate provisioning services.
Data storage is a primary service of cloud computing. Data is stored on multiple servers that are maintained by third parties. The user sees a virtual server, which makes it appear as if the data is stored in a particular place with a specific name on a particular server, but in reality this is not the case; the name merely references virtual space in the cloud. In reality, the user's data could be stored on any one or more of the computers used to create the cloud.
2.2 Cloud Computing Service Models
Cloud service providers offer three different services based on different capabilities: SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). Software as a Service (SaaS): when a user demands a service through a thin client such as a browser over the internet, the software running on the provider's cloud infrastructure is provided to the client. Examples are Google Docs and salesforce.com. Platform as a Service (PaaS): the user can create and develop applications on the provider's platform. The provider gives the client a completely virtualized platform that consists of one or more servers, operating systems and specific applications; the main services provided are storage, database and scalability. Examples are Google App Engine and Mosso. Infrastructure as a Service (IaaS): the user or client gets access to networking, storage and servers through a service API. IaaS provides virtual infrastructure to the client, who pays on a per-use basis. Examples are Flexiscale and AWS EC2 (Amazon Web Services).
2.3 Cloud computing Deployment models
The security issues start with the cloud deployment models. Depending on infrastructure ownership, there are four deployment models of cloud computing, as described below. The public cloud describes cloud computing in the traditional mainstream sense: resources are dynamically made available on a self-service basis over the Internet. It is usually owned and governed by a large organization; examples include Amazon, Google's App Engine and Microsoft Azure. This is the most cost-effective model, but it presents the user with privacy and security issues, since the physical location of the provider's infrastructure usually spans numerous national boundaries.
The private cloud differs from the traditional mainstream through its use of virtualization to create a single-tenant environment. Private clouds have been criticized on the basis that users or clients have to buy, build and manage them, and as such do not benefit from lower capital costs and less hands-on management. This cloud is more suitable for enterprises, especially mission- and safety-critical organizations.
The community cloud is shared by various organizations within a specific community; such clouds may be managed by one of the organizations or by a third party. A typical example is the Open Cirrus Cloud Computing Testbed, which is a collection of federated data centers across six
sites spanning from North America to Asia. The hybrid cloud consists of any two of the three models mentioned above. Standardization of APIs has led to easier distribution of applications across different cloud models.
3. CLOUD COMPUTING SECURITY ISSUES
3.1 Layered Framework for Cloud Security
The layered framework consists of a Virtual Machine layer, a Cloud Storage layer, a Cloud Data layer and a Virtual Network Monitor layer. The Cloud Storage layer has a storage infrastructure that integrates resources from multiple cloud service providers to build a huge virtual storage system. The Virtual Network Monitor layer combines both hardware and software solutions in virtual machines to handle problems.
There are many groups working on and interested in developing standards and security for clouds. The Cloud Standards web sites collect and coordinate information about cloud-related standards under development by other groups. The Cloud Security Alliance (CSA) is one of them; the CSA gathers solution providers, non-profits and individuals to discuss current and future best practices for information assurance in the cloud. Another group is the Open Web Application Security Project (OWASP), which maintains a list of vulnerabilities for cloud-based or Software as a Service deployment models, updated as the threat landscape changes. The Open Grid Forum publishes documents that contain security and infrastructure specifications and information for grid computing developers and researchers.
3.2 Components Affecting Cloud Security
There are various security issues for cloud computing because it encompasses many technologies, such as transaction management, resource allocation, operating systems, virtualization, cloud networks, load balancing, databases, memory management and concurrency control. For instance, the cloud network, which interconnects the systems in a cloud, should be secure, and load balancing algorithms have to be executed securely.
Virtualization in cloud computing is also a security concern; for example, the mapping of virtual machines to physical machines has to be carried out securely. Resource allocation and memory management algorithms need to be secure. Concurrency control involves encrypting the data as well as ensuring that appropriate policies are enforced for data sharing. Data mining techniques may be applicable to malware detection in clouds.
3.3 Security Issues Faced By Cloud Computing
There are many security issues related to the cloud. The cloud service provider must ensure that the consumer does not face problems such as data theft or data loss. The cloud computing infrastructure uses the latest technologies and services, which are not yet fully evaluated with respect to security. Hence there is a real possibility that an attacker can penetrate the cloud by impersonating a legitimate user and thereby infect the entire cloud, affecting all of its users. The security issues faced by cloud computing are discussed below.
3.3.1 Data Access Control
Sometimes confidential data can be accessed illegally due to a lack of secured data access control. Protecting secured data has become a major issue in cloud-based systems: the longer the data exists in the cloud, the greater the risk.

3.3.2 Data Integrity


Sometimes data may contain errors introduced by human error when it is entered into the system. Errors may also occur when data is transmitted, and malfunctions such as a disk crash, or viruses in the cloud, can likewise corrupt data. Hence a data integrity mechanism is required in cloud computing so that data is kept intact.
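One common such mechanism, shown here only as a generic illustration and not as something the paper specifies, is to record a cryptographic digest of the data at upload time and recompute it on retrieval:

    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    stored = digest(b"file contents at upload time")
    # After download, a mismatch indicates corruption in transit or at rest.
    assert digest(b"file contents at upload time") == stored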
3.3.3 Data Loss
Data loss is a very serious problem in cloud computing. Research and development teams may be sharing their important data online, and if an intruder accesses that shared information illegally, their data is compromised. Even if everything is otherwise secure, if a server goes down, crashes or is attacked by a virus, the whole system can go down and data loss may occur. If the vendor closes due to legal or financial problems, the consumers lose their data, because it is no longer available once the vendor has shut down.
3.3.4 Data Theft
External servers are used in cloud computing for the storage of data. The data that is stored in
these servers is vulnerable and there is a chance that it can be stolen.
3.3.5 Data Location
Consumers do not always know the location of the data stored in the cloud, and the vendor does not reveal where all the data is kept. Cloud computing provides a high degree of data mobility, and data may be stored in different countries. Data can be kept in a particular location requested by the consumer, but this requires a contractual agreement between the cloud service provider and the consumer.
3.3.6 Security Issues at the Provider Level
A cloud is secure only when the vendor provides good security to the customers. The provider should build a strong security layer for the consumer and the user, and make sure that the server is well protected from all the external threats it may come across.
3.3.7 Privacy Issues
Security of the customer's personal information is very important in cloud computing. Since most of the servers are external, the vendor should make sure the information is well secured, even from the operators.
3.3.8 Infected Application
The service provider should have full access to the server, with all the rights needed for maintenance and monitoring. This prevents any malicious user from uploading an infected application onto the cloud, which could severely affect the customer and cloud computing services.
3.3.9 User Level Issues
Users should take care that data is not lost or tampered with while entering it into the system.
4. DESIGN AND IMPLEMENTATION
The architecture of the scheduled hiding of data system mainly consists of the following components.

4.1. Metadata Server (MDS)


MDS is responsible for user management, server management, session management and file
metadata management.
4.2. Application Node
The application node is a client that uses the storage service for the scheduled hiding of data.
4.3. Storage Node
Each storage node is an OSD. It contains two core subsystems: a {key, value} store subsystem and an active storage object (ASO) runtime subsystem. The {key, value} store subsystem, based on the object storage component, is used for managing objects stored in storage nodes: looking up objects, reading/writing objects and so on. The object ID is used as the key, and the associated data and attributes are stored as values. The ASO runtime subsystem, based on the active storage agent module in the object-based storage system, is used to process active storage requests from users and to manage method objects and policy objects.
An active storage object derives from a user object and has a time-to-live (ttl) value property [3] [4]. The ttl value is used to trigger the self-destruct operation. The ttl value of a user object is infinite, so a user object will not be deleted until the user deletes it manually. The ttl value of an active storage object is limited, so an active object will be deleted when the value of the associated policy object becomes true.
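A minimal sketch of this distinction, with illustrative class and field names that are not taken from the paper: a user object is modelled with an infinite ttl, while an active storage object carries a finite ttl and is dropped once its policy evaluates to true.

    import time

    class StorageObject:
        def __init__(self, object_id, value, ttl=None):
            self.object_id = object_id      # used as the key in the {key, value} store
            self.value = value              # associated data and attributes
            self.created = time.time()
            self.ttl = ttl                  # None models the infinite ttl of a user object

    def policy(obj):
        # Policy for an active storage object: true once its ttl has elapsed.
        return obj.ttl is not None and time.time() - obj.created > obj.ttl

    def sweep(store):
        # Delete every active object whose associated policy evaluates to true.
        return {oid: obj for oid, obj in store.items() if not policy(obj)}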

Fig 4.1 Scheduled data hiding system Design


4.4 Data Process
In the data process, users can upload a file to the scheduled hiding system or download a previously uploaded file. These two processes are explained below.
4.4.1 Uploading File Process
To upload a file, the user must specify the file, a key and a ttl (time-to-live) parameter. When the user uploads the file, the key is not stored in the system; instead, the shadow parts generated from the actual key are stored. This adds to the overall security of the system, as the key itself will not
be stored; only the shares generated by the Shamir secret sharing algorithm will be stored. The Encrypt File procedure is used to encrypt the content of the uploaded file, and the AES encryption algorithm is deployed for this encryption.
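Putting the pieces together, a hedged sketch of the upload path as described here, reusing the illustrative encrypt_file and make_shares helpers from the earlier sketches; the record layout and parameter defaults are assumptions, not the paper's implementation.

    def upload_file(path, ttl_seconds, k=3, n=5):
        key, blob = encrypt_file(path)                 # AES-encrypt the file contents
        shares = make_shares(int.from_bytes(key, "big"), k, n)
        return {
            "ciphertext": blob,                        # stored in the cloud
            "key_shares": shares,                      # shadows only, never the key
            "ttl": ttl_seconds,                        # user-specified time-to-live
            "status": "active",
        }                                              # the raw key is discarded here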
4.4.2 Downloading File Process
To download a file, the user must specify the file to be downloaded along with the key corresponding to that file. Once the user submits the key, the downloadFile procedure regenerates the key from the key shares stored in the system. This regenerated key is then matched with the key entered by the user. If the two keys match, the user is given access to the file, i.e. the download process begins; if there is a mismatch, the download process is not initiated. Moreover, before downloading, the status of the file must be checked: once the time-to-live has elapsed, the status of the file is changed to inactive, and in that case access to the file is not provided even if the keys match. If the keys match and the file's status is still active, the content of the file is decrypted.
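The matching download path under the same assumptions: the file must still be active, the key is regenerated from K stored shares and compared with the key supplied by the user, and only then is the content decrypted.

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def download_file(record, user_key, k=3):
        if record["status"] != "active":               # ttl elapsed: file marked inactive
            raise PermissionError("time-to-live elapsed; request access from the admin")
        # Regenerate the key from any k stored shares and match it with the user's key.
        rebuilt = recover_secret(record["key_shares"][:k]).to_bytes(32, "big")
        if rebuilt != user_key:
            raise ValueError("key mismatch; download not initiated")
        nonce, ciphertext = record["ciphertext"][:12], record["ciphertext"][12:]
        return AESGCM(rebuilt).decrypt(nonce, ciphertext, None)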
5. CONCLUSION
More and more people are moving towards cloud computing, and data privacy has become increasingly important in the cloud environment. The scheduled hiding of data system is a new approach for protecting data privacy from attackers who retroactively obtain, through legal or other means, a user's stored data and private decryption keys. In this system, access to a file is provided to its respective users only during the stipulated time period, after which the file cannot be accessed. Moreover, if the user wants to access a file after the expiration period, he has to request access from the admin, and only after the admin grants access can the user access the file. The use of the Shamir secret sharing scheme to store the private keys corresponding to the respective files adds to the overall security of the system, as the shares created from the keys are stored rather than the keys themselves. Hence the scheduled hiding of data system helps in tackling the data privacy concern.
6. REFERENCES
1. Roxana Geambasu, Tadayoshi Kohno, Amit A. Levy and Henry M. Levy, "Vanish: Increasing Data Privacy with Self-Destructing Data", 18th USENIX Security Symposium, USENIX Association, Montreal, Canada, August 10-14, 2009.
2. Lingfang Zeng, Shengjie Xu and Dan Feng, "SafeVanish: An Improved Data Self-Destruction for Protecting Data Privacy", 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Nov. 30 - Dec. 3, 2010.
3. Tina Miriam John and John A. Chandy, "Active Storage Using Object-Based Devices", Second International Workshop on High Performance I/O Systems and Data Intensive Computing (HiperIO'08).
4. Yu Zhang and Dan Feng, "An Active Storage System for High Performance Computing", 22nd International Conference on Advanced Information Networking and Applications.
5. Lingjun Qin and Dan Feng, "Active Storage Framework for Object-Based Storage Device", Proceedings of the 20th International Conference on Advanced Information Networking and Applications (AINA'06).
6. Seung Woo Son et al., "Enabling Active Storage on Parallel I/O Software Stacks", Mass Storage Systems and Technologies (MSST), 2010 IEEE 26th Symposium, pp. 1-12.
7. Yulai Xie et al., "Design and Evaluation of Oasis: An Active Storage Framework Based on T10 OSD Standard", Mass Storage Systems and Technologies (MSST), 2011 IEEE 27th Symposium, pp. 1-12.
8. Yang Tang, Patrick P. C. Lee, John C. S. Lui and Radia Perlman, "Secure Overlay Cloud Storage with Access Control and Assured Deletion", IEEE Transactions on Dependable and Secure Computing, vol. 9, no. 6, 2012.
9. Cong Wang, Qian Wang, Kui Ren and Wenjing Lou, "Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing", INFOCOM, Proceedings IEEE, 2010.
10. Radia Perlman, "File System Design with Assured Delete", Proceedings of the Third IEEE International Security in Storage Workshop (SISW'05), 2005.
11. Mesnier M., Ganger G. and Riedel E., "Object-Based Storage: Pushing More Functionality into Storage", IEEE Potentials, Volume 24, Issue 2.
12. Yingping Lu, David H. C. Du and Tom Ruwart, "QoS Provisioning Framework for an OSD-Based Storage System", Proceedings of the 22nd IEEE / 13th NASA Goddard Conference on Mass Storage Systems and Technologies (MSST 2005).
13. Zhongying Niu et al., "Implementing and Evaluating Security Controls for an Object-Based Storage System", 24th IEEE Conference on Mass Storage Systems and Technologies (MSST 2007).
14. Yangwook Kang, Jingpei Yang and Ethan L. Miller, "Object-Based SCM: An Efficient Interface for Storage Class Memories", Mass Storage Systems and Technologies (MSST), 27th Symposium, IEEE, 2011.
15. V. V. Dimakopoulos, A. Kinalis, S. Mastrogiannakis and E. Pitoura, "The Smart Autonomous Storage (SMAS) System", Communications, Computers and Signal Processing, PACRIM, IEEE Pacific Rim Conference, 2001.
16. Rajiv Wickremesinghe, Jeffrey S. Chase and Jeffrey S. Vitter, "Distributed Computing with Load-Managed Active Storage", Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing (HPDC-11), 2002.
17. Supriya Mandhare, Dr. A. K. Sen and Rajkumar Shende, "A Proposal on Protecting Data Leakages in Cloud Computing", International Journal of Computer Engineering & Technology (IJCET), Volume 6, Issue 2, 2015, pp. 45-53, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
18. Sujay Pawar and Prof. Mrs. U. M. Patil, "A Survey on Secured Data Outsourcing in Cloud Computing", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 3, 2013, pp. 70-76, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
19. Abhishek Pandey, R. M. Tugnayat and A. K. Tiwari, "Data Security Framework for Cloud Computing Networks", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 178-181, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
20. Bhavik Agrawal, "Green Cloud Computing", International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 4, Issue 7, 2013, pp. 239-243, ISSN Print: 0976-6464, ISSN Online: 0976-6472.
