
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

Vol. 11 No. 1
May 2017

EDITURA UNIVERSITARĂ
Bucureşti
Foreword

Welcome to the Journal of Information Systems & Operations Management (ISSN 1843-4711; IDB indexation: ProQuest, REPEC, QBE, EBSCO, COPERNICUS). This is an open-access journal published twice a year by the Romanian-American University. The published articles focus on IT&C and belong to national and international researchers and professors who want to share their research results, exchange ideas and speak about their expertise, as well as to Ph.D. students who want to improve their knowledge and present their emerging doctoral research.
As a challenging and favorable medium for scientific discussion, every issue of the journal contains articles dealing with current topics in computer science, economics, management, IT&C, and related fields. Furthermore, JISOM encourages cross-disciplinary research by national and international researchers and welcomes contributions which give a special "touch and flavor" to the mentioned fields. Each article undergoes a double-blind review by an internationally and nationally recognized pool of reviewers.
JISOM thanks all the authors who contributed to this issue by submitting their work for publication, as well as all the reviewers who gave their valuable time to review and evaluate the manuscripts.
Last but not least, JISOM aims at being one of the distinguished journals in the mentioned fields.
Looking forward to receiving your contributions,
Best Wishes
Virgil Chichernea, Ph.D.
Editor-in-Chief
JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT

GENERAL MANAGER
Professor Ovidiu Folcut

EDITOR IN CHIEF
Professor Virgil Chichernea

MANAGING EDITORS
Professor George Carutasu
Lecturer Gabriel Eugen Garais

EDITORIAL BOARD

Academician Gheorghe Păun Romanian Academy


Academician Mircea Stelian Petrescu Romanian Academy
Professor Eduard Radaceanu Romanian Technical Academy
Professor Ronald Carrier James Madison University, U.S.A.
Professor Pauline Cushman James Madison University, U.S.A.
Professor Ramon Mata-Toledo James Madison University, U.S.A.
Professor Allan Berg University of Dallas, U.S.A.
Professor Kent Zimmerman James Madison University, U.S.A.
Professor Traian Muntean Université Aix-Marseille II, FRANCE
Associate Professor Susan Kruc James Madison University, U.S.A.
Associate Professor Mihaela Paun Louisiana Tech University, U.S.A.
Professor Cornelia Botezatu Romanian-American University
Professor Ion Ivan Academy of Economic Studies
Professor Radu Şerban Academy of Economic Studies
Professor Ion Smeureanu Academy of Economic Studies
Professor Floarea Năstase Academy of Economic Studies
Professor Sergiu Iliescu University “Politehnica” Bucharest
Professor Victor Patriciu National Technical Defence University
Professor Lucia Rusu University “Babes-Bolyai” Cluj Napoca
Associate Professor Sanda Micula University “Babes-Bolyai” Cluj Napoca
Associate Professor Ion Bucur University “Politehnica” Bucharest
Professor Costin Boiangiu University “Politehnica” Bucharest
Associate Professor Irina Fagarasanu University “Politehnica” Bucharest
Associate Professor Viorel Marinescu Technical Civil Engineering Bucharest
Associate Professor Cristina Coculescu Romanian-American University
Associate Professor Daniela Crisan Romanian-American University
Associate Professor Alexandru Tabusca Romanian-American University
Lecturer Gabriel Eugen Garais Romanian-American University
Lecturer Alexandru Pirjan Romanian-American University

Senior Staff Text Processing:


Lecturer Justina Lavinia Stănică Romanian-American University
Assistant lecturer Mariana Coancă Romanian-American University
PhD Student Dragos-Paul Pop Academy of Economic Studies
JISOM journal details 2017

No.  Item                                                Value
1    Category 2010 (by CNCSIS)                           B+
2    CNCSIS Code                                         844
3    Complete title / IDB title                          JOURNAL OF INFORMATION SYSTEMS & OPERATIONS MANAGEMENT
4    ISSN (print and/or electronic)                      1843-4711
5    Frequency                                           SEMESTRIAL
6    Journal website (direct link to journal section)    http://JISOM.RAU.RO
7    IDB indexation (direct link to journal section / search interface):
     ProQuest
     EBSCO
     REPEC: http://ideas.repec.org/s/rau/jisomg.html
     COPERNICUS: http://journals.indexcopernicus.com/karta.php?action=masterlist&id=5147
     QBE
Contact

First name and last name Virgil CHICHERNEA, PhD


Professor

Phone +4-0729-140815 | +4-021-2029513

E-mail chichernea.virgil@profesor.rau.ro
vchichernea@gmail.com

ISSN: 1843-4711
The Proceedings of JISOM Vol. 11 No. 1

CONTENTS
Editorial

Rezarta Shkurti (Perri), Dionisa Allko, Elfrida Manoku: BUSINESS REPORTING LANGUAGE – A SURVEY WITH THE ALBANIAN COMPANIES AND INSTITUTIONS (p. 1)
Mihai Zaharescu, Costin-Anton Boiangiu, Ion Bucur: BLIND IMAGE DEBLURRING USING A SINGLE-IMAGE SOURCE: THE STATE OF THE ART (p. 17)
George Căruţaşu, Mironela Pîrnău: FACILITIES AND CHANGES IN THE EDUCATIONAL PROCESS WHEN USING OFFICE365 (p. 29)
Laurenţiu A. Dumitru, Sergiu Eftimie, Ciprian Răcuciu: USING CLOUDS FOR FPGA DEVELOPMENT – A COMMERCIAL PERSPECTIVE (p. 42)
Ştefan Iovan: PREDICTIVE ANALYTICS FOR TRANSPORTATION INDUSTRY (p. 58)
Costin-Anton Boiangiu, Luiza Grigoraş: MRC – THE THEORY OF LAYER-BASED DOCUMENT IMAGE COMPRESSION (p. 72)
Alexandru Tăbuşcă, Laura Cristina Maniu: MODERN TECHNOLOGIES AND INNOVATION – SOURCE OF COMPETITIVE ADVANTAGE FOR TOURISM SMES (p. 83)
Gabriela-Angelica Mihalescu, Alin-Gabriel Gheorghe, Costin-Anton Boiangiu: TEACHING SOFTWARE PROJECT MANAGEMENT: THE COLLABORATIVE VERSUS COMPETITIVE APPROACH (p. 96)
Alexandru Pîrjan: MANAGING GRAPHICS PROCESSING UNITS' MEMORY AND ITS ASSOCIATED TRANSFERS IN ORDER TO INCREASE THE SOFTWARE PERFORMANCE (p. 106)
Adelina-Gabriela Chelcea, Alex-Cătălin Filiuta, Costin-Anton Boiangiu: TEACHING SOFTWARE PROJECT MANAGEMENT: THE COLLABORATIVE VERSUS INDEPENDENT APPROACH (p. 118)
Marilena-Roxana Zuca: PERFORMANCE MEASUREMENT OF AN ENTITY FROM THE PERSPECTIVE OF FINANCIAL STATEMENTS (p. 128)
Andrei Tigora: DESIGNING A FLEXIBLE DOCUMENT IMAGE ANALYSIS SYSTEM: THE MODULES (p. 140)
Dana-Mihaela Petroşanu, Alexandru Pîrjan: IMPLEMENTATION SOLUTIONS FOR DEEP LEARNING NEURAL NETWORKS TARGETING VARIOUS APPLICATION FIELDS (p. 154)
Diana-Maria Popa: ON IMAGE RECOLORING - COLOR GRADING (p. 169)
Claudiu Dan Bârcă: TETRA SYSTEM - OPEN PLATFORM - INTEROPERABILITY AND APPLICATIONS (p. 183)
Alice Tinta: FINANCIAL CONTROL IN AN IT ENVIRONMENT: WARRANT OF THE FINANCIAL PERFORMANCE OF THE ENTITY (p. 194)
Larisa Gavrilă, Sorin Ionescu: FINANCIAL ADVANTAGES OF SOFTWARE PERSONALIZATION (p. 205)

BUSINESS REPORTING LANGUAGE – A SURVEY WITH THE ALBANIAN


COMPANIES AND INSTITUTIONS

Rezarta Shkurti (Perri) 1*


Dionisa Allko 2
Elfrida Manoku 3

ABSTRACT

Financial reporting is a crucial element in providing long-term success to a business or an institution. One of the newest concepts which has already gained credibility in the business environment is the Extensible Business Reporting Language, XBRL. Through this paper we try to analyze the current status of XBRL in Albania and to give an overview of how well this concept is known by the accountants and auditors in our country. We conduct a survey to collect their opinions and we find that they mostly get information about new concepts in accounting and/or auditing from postgraduate studies rather than from the professional trainings offered by the accounting/auditing associations. We also find that the respondents consider XBRL to have many potential benefits, but they also believe it may be too costly to be implemented by the companies they work with. Regarding the current status of XBRL in Albania, we find that none of the companies, organizations or institutions has started to implement it or plans to do so in the near future.

JEL CLASSIFICATION CODE: M41

KEYWORDS: XBRL, financial reporting

INTRODUCTION

Extensible Business Reporting Language (XBRL) is a language belonging to the XML (Extensible Markup Language) family. XBRL is defined as a standard which simplifies the exchange of financial statements, performance reporting and accounting data. Even though XBRL is derived from XML, it is easy to use and does not require previous knowledge of XML or Information Technology (IT). Virtually any entity is free to use XBRL to communicate its financial information and financial statements, because it codes the data of these statements and tags the items of the reports in a way that allows them to be read automatically by different programs and applications. Simply put, XBRL is a tool which increases the benefits of the financial statements to their users.
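As a purely illustrative sketch of this tagging idea (the element names, entity, unit code and figures below are invented for illustration and do not follow any official XBRL taxonomy), a few report items can be wrapped in machine-readable tags with Python's standard xml.etree module:

```python
# Hypothetical illustration of the tagging idea behind XBRL: each reported
# figure is wrapped in a tag that carries its meaning, so software can read
# the report without human intervention. Not an official XBRL taxonomy.
import xml.etree.ElementTree as ET

report = ET.Element("report", attrib={"entity": "ExampleCo", "period": "2016"})
ET.SubElement(report, "Assets", attrib={"unit": "ALL"}).text = "1500000"
ET.SubElement(report, "Liabilities", attrib={"unit": "ALL"}).text = "900000"
ET.SubElement(report, "Equity", attrib={"unit": "ALL"}).text = "600000"

print(ET.tostring(report, encoding="unicode"))

# A consuming application can now locate any figure by its tag name:
assets = int(report.find("Assets").text)
```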

1
* corresponding author, Professor PhD, University of Tirana, Department of Accounting, Rruga e Elbasanit,
Tirana, Albania. rezartaperri@feut.edu.al
2
MSc. in Accounting and Auditing, Personal Banker Raiffeisen Bank, Berat, Albania.
dionisa_allko@hotmail.com
3
Professor, University of Tirana, Department of Marketing, Rruga e Elbasanit, Tirana, Albania.
elfridamanoku@feut.edu.al


For more than 15 years, since its first introduction in 1999, XBRL has been revolutionizing business reporting. It facilitates communication, transportation and access to information in a cost-effective manner, with no need for intervention by a user and regardless of the type of operating system or platform in use. It also minimizes the human errors that come from manually entering or copying information, thus making information sharing, processing and the decision-making process itself faster and more reliable.
XBRL is said to have revolutionized business information reporting through the internet and other digital media in the same way the Hypertext Markup Language (HTML) did for internet use many years ago. HTML was developed for formatting and displaying the substance of information, whereas XBRL, an XML derivative, is being developed not only to display the information, but also to keep and preserve its context by tagging it with special tags that are part of the XBRL taxonomy. This is a very important feature of XBRL, because even though internet research makes it easy to obtain information, without the appropriate context this information is not readily usable and requires a lot of time and resources to process and prepare for specific analyses.
Charles Hoffman, CPA, is the person who first introduced the XBRL concept. He has worked on the XBRL project at the AICPA (the American Institute of Certified Public Accountants) since its inception in 1998. In 2006, he was honored by the AICPA for his important contribution to the advancement of the XBRL project. Raynier van Egmond is another distinguished researcher with a great contribution to the XBRL project. He is an IT specialist with more than 25 years of experience in ICT and an expert in project design and research for the financial and production industries. He has been part of the XBRL project since its beginning.
While the concept of using XBRL for business reporting purposes is widely covered in international empirical and theoretical studies, there are only a few references in Albania (Lamani & Cepani, 2011; Shkurti & Demiraj, 2013; Gruda, 2014; Zyberi & Rova, 2014). Nevertheless, the above-mentioned papers do not specifically address the XBRL concept (except for the last one), but merely mention it somewhere in the article body. The introduction of XBRL to the Albanian accountants and auditors may come not only from academic papers but also through other sources, such as academic textbooks, professional trainings or individual research. Therefore, the main purpose of this study is to investigate the current status of XBRL application in Albania and to research the level of information that the accountants and auditors in Albania have about the XBRL language.
The rest of the paper is organized as follows: in the first part we present a brief literature review on the concept of XBRL. The literature review is based exclusively on international publications, as Albanian authors have not touched upon this new concept yet. In the second part of the paper we lay out the methodology we have used to conduct the research. Next we present and discuss the detailed results of the analysis, and in the end we summarize our main findings and conclusions.


1. LITERATURE REVIEW

Over the last fifteen years many papers have touched upon the concept of XBRL. The first study is Cohen et al. (1999), who suggest that XML offers a lot of potential benefits for accountants. In the same year, we find many studies regarding XML in the Journal of Accountancy (Anonymous 1999; Anonymous 1999; Fleming 1999; Harding 1999; Hoffman, et al. 1999), The CPA Journal (Anonymous 1999; Honig 1999; Kepczyk 1999; Schmidt and Cohen 1999), Practical Accountant (Anonymous 1999; Anonymous 1999; Anonymous 1999; Anonymous 2000), Business Wire (Business 1999; Business and High Tech 1999; Business Editors 1999; Business/Technology 1999), and Computerworld (Hoffman 1999).
The first journal to publish about XBRL was the International Journal of Digital Accounting Research. In 2001 it published Bonson Ponte's paper "The role of XBRL in Europe", in which the author argues that XBRL is the answer to the information needs of the new century and that it will facilitate the communication of information in a homogeneous way. Since then there have been many publications in a variety of academic journals, which suggests that academia (at least) is fully committed to developing and managing a new technology for business reporting that will affect the way financial statements are prepared, presented and delivered to potential investors in real time.
Roohani et al. (2010), in an analysis of the XBRL literature over the decade, report that from 1998 up to 2008 more than 53 papers were published about the XBRL language, with an increasing trend in the latest years. We may only assume that the same trend has continued after 2008. During this time frame different authors attest the significant theoretical and practical development of XBRL. Baldwin et al. (2006) argue in their study that XBRL will both simplify disclosure and ease the communication of financial information to users, analysts, and regulators via the Internet. But Boritz and No (2005) state that the development of XBRL up to that phase was not sufficient to guarantee information integrity and security. Still, XBRL allows a diversity that few other schemes have offered before (Williams, Scifleet and Hardy, 2006). Moreover, internet reporting can help companies move away from traditional reporting towards a sustainable and advanced communication of information (Isenmann, Bey and Welter, 2007).
In 2007 the International Federation of Accountants performed a survey with accountants and auditors, as well as with standard-setting and regulatory bodies. The results revealed that, in the participants' perception, during the previous five years the corporate governance mechanisms, the financial statement preparation and the financial statement auditing processes had significantly improved, but the financial reports themselves had not become more useful to them. The same survey also pinpointed great support for XBRL from academics and accountants all over the world, especially from European and Asian ones. In the same year the study of Gunn (American Accounting Association, 2007) discusses the benefits and opportunities related to the use of XBRL. One year later, in 2008, a study by Premuroso and Bhattacharya found that a company's decision to use the XBRL format to report financial information is positively correlated with its corporate governance.


In 2009 Ristenpart et al. described XBRL as "a free kingdom, open for software using XML to describe financial information for public and private companies". This description confirms the similarity of XBRL with XML in that it can be used to describe financial information based on data tagging. Another important fact from this description is that XBRL is an open and free standard, in that no license has to be paid in order to make use of it. XBRL is also often referred to as "interactive data" (Garbellotto, 2009a, pg. 56). This means that the data is used in a new and innovative way and that, using XBRL, data exchange and processing are improved.
Pinsker and Li (2008, pg. 47-48) included non-financial information in their description of XBRL, stating that XBRL is a web-based XML derivative used to tag and add meaning to financial and non-financial information. The web-based part, however, is probably inaccurate, because being an XML derivative, XBRL does not depend on the web to store and manipulate data.
In the same year (2008), Dzinkowski compared the XBRL technology with the barcoding regime (the Universal Product Code), concluding that XBRL is an information technology language, not a barcoding regime, specifically designed for better financial information management. XBRL is similar to the UPC in that it represents a form of standardization which in turn provides the necessary efficiency to the process. The UPC facilitates better inventory management, thus reducing its overall costs, whereas XBRL facilitates better information management for the company.
There are several difficulties and even threats regarding the long-term survival and application of XBRL. The unsafe internet environment and opposition from some accountants and auditors are just a few of these threats. Future studies may want to focus on finding out how to enhance security during XBRL application projects, or on how the XBRL taxonomy may be adopted in different jurisdictions, thus helping the worldwide financial reporting environment.

2. METHODOLOGY OF THE STUDY

The main purpose of this paper is to collect information in order to assess the level of information and knowledge that the Albanian accountants and auditors have regarding the XBRL concept. To achieve this objective we have used a survey through which we aim to answer questions related to XBRL, such as: the source(s) of information regarding this concept, its possible implementation in Albania, the institutions which may be held responsible for introducing and implementing the XBRL taxonomy, the impact it may have on the accounting and financial reporting standards, the types of companies that might use and benefit from it, and the possible reasons it is not yet implemented.
In order to perform this study through the survey tool we first defined the population of the study. We decided to include in the population the chartered accountants, the certified auditors, the accounting academics and professors, the accounting and financial officers, as well as master-level accounting and auditing students who had part-time or full-time jobs. Next, out of this population, we identified a random sample of 300 individuals and sent the survey to them electronically.


We assume that studying the selected sample will reveal the population's level of knowledge of the XBRL concept.
The survey was electronically sent to all the certified auditors of Albania registered in the registry of the Institute of Certified Auditors of Albania at the time of the study; to the professors of accounting, auditing and finance of the University of Tirana and of some other non-public universities; to some internal auditors working in several banks in Albania; to some auditors working for the big four auditing companies in Albania; to some accounting and financial officers working for several Albanian companies in the services sector; and to some master's students in accounting and auditing who are also working in the profession. We got back 75 completed questionnaires out of the 300 sent out, which represents a response rate of about 25 %. According to Nulty (2008), online and electronic questionnaires usually have a lower response rate (from 20% to 30%) compared to paper-based questionnaires, but this does not imply that they are to be considered less accurate, only that they have been completed by the individuals who are really attracted to the subject of the survey. Therefore we consider a response rate of 25 % to be an acceptable and reliable statistic on which to perform the analysis of the collected data.
The questionnaire consists of ten questions, each of them short and concise, because we believe that long electronic questionnaires with ambiguous questions are overlooked by the respondents and therefore do not yield reliable conclusions. On average it took the respondents three minutes to complete the questionnaire. We used the SurveyMonkey website to compile and process the questionnaire. The survey was conducted over a period of five weeks.
We encountered several difficulties during the survey phase, such as choosing the most reliable way to deliver and complete the questionnaires, defining and expressing simple and direct questions, defining the population of the study, selecting the sample, delivering, gathering and finally processing and analyzing the completed questionnaires. We relied exclusively on the SurveyMonkey tools and Excel to conduct the analysis of the questionnaire.

3. RESULTS OF ANALYSIS

We start the questionnaire by asking the participants in the study whether they have ever heard about XBRL during their professional career. Based on their answers we can categorize the respondents into two preliminary groups. In figure 1 we see that 56 % of the respondents have some level of knowledge about the XBRL concept, whereas 44 % of them have never heard about XBRL before. Because more than half of the sample has some sort of knowledge of this concept, we continue asking and gathering further information from them.


Figure 1. Familiarity with the XBRL concept

We ask the respondents who had never heard about XBRL which they think should be the primary source for obtaining information about a new concept in accounting/financial reporting. This is a multiple-choice question with several options, such as: periodic information sent electronically by accounting professional associations, professional trainings organized by the institution where they work, postgraduate studies, or another option left blank for the respondents to fill in. With 43 % of the answers, we find that periodic information sent electronically by accounting professional associations is considered the most important source of information. This may be due to the exceedingly high expectations that the accounting professionals have of the association(s) they are affiliated with. The second most important source, with 28 % of the answers, is the institution where the respondents currently work, while postgraduate studies come third, with 18 %. Figure 2 presents the results of this question.


Figure 2. The primary sources to obtain information about new concepts in accounting / finance; subgroup 1 (never heard about XBRL before).

The same question was put to the individuals who did have some knowledge and information about XBRL. The results are presented in figure 3. We found that this group regarded postgraduate studies as the primary source of information, with 37 % of the answers. Next come other sources1, with 29 % of the answers; third, with 18 %, come the professional trainings organized by the institutions where the respondents currently work, and 16 % is represented by periodical information sent electronically by accounting professional bodies. We may see a difference between the two subgroups of the sample. The individuals who have never heard about XBRL are more inclined to obtain new information from periodical information sent electronically by accounting professional bodies, whereas the individuals who had some level of knowledge about XBRL (no matter how deep this knowledge is) are more prone to look for updates in postgraduate studies. An important conclusion we may draw is that the XBRL concept is well defined and explained in academic curricula, but not well presented in the professional trainings delivered by professional accounting bodies in Albania.

1 These are sources that an individual may come across during everyday work or study; the respondents mentioned several, such as internet research, foreign clients of the company where they currently work, the IASB website, etc.


Figure 3. The primary sources to obtain information about new concepts in accounting / finance; subgroup 2 (have some level of information about XBRL).

After the first three questions, presented and analyzed in the paragraphs above, we continued the survey relying on the answers of the second subgroup only (the individuals who had some degree of information about XBRL). It is obvious that we cannot analyze answers coming from respondents for whom XBRL was a totally new concept.
The fourth question focuses on the implementation of the XBRL taxonomy in Albania, and whether it may be possible or even necessary. 93 % of the respondents answered positively, stating that they saw a lot of potential in implementing this new technological tool in Albania. Only 7 % think XBRL does not yet need to be introduced and implemented. We may argue that the professionals who answered positively notice the need to prepare financial information to the same standards as their foreign counterparts and colleagues do. Therefore they consider it very important to unify and standardize the information flow in order to guarantee integration and cooperation across all levels of financial administration.


Figure 4. Is XBRL implementation necessary or possible in Albania?

As a considerable majority of the sample responded that XBRL implementation is both possible and necessary in Albania, we try to reveal which institution(s) they perceive as the right one(s) to undertake such a project. Their answers are depicted in figure 5. 33 % of the respondents think that the General Tax Directory (GTD) is the structure that should take the initiative to first implement the XBRL taxonomy in Albania. 28 % of the respondents think it should be the Financial Supervision Authority, 19 % selected the National Registrar Center, 9 % are for the Ministry of Finance, while 12 % chose the Institute of Statistics and the National Council of Accounting. We may notice that the GTD is considered a likely candidate by many of the respondents because they view tax information as very important; it should therefore be gathered and processed with an innovative platform such as XBRL, which facilitates better controls and real-time reporting.


Figure 5. Albanian institutions that may implement XBRL

Figure 6. Will XBRL have any impact on accounting and financial reporting standards?


In the sixth question of the survey we ask the respondents whether they perceive that the XBRL taxonomy will impact the accounting and financial reporting standards. In figure 6 we may see that 72 % of the respondents answered negatively, stating that they do not expect XBRL to have any impact on financial reporting standards. To rationalize their answer, we turn to the literature review conclusions, where several authors report that XBRL, as much as it transforms and revolutionizes financial information reporting, does not impact any of the local or international accounting standards.
The seventh question in the survey was an open-ended one, because we tried to gather the respondents' opinions on which industries and companies might benefit from using the XBRL platform and taxonomy. 40 % of the respondents say that the big public companies may benefit from implementing XBRL, arguing that these companies need timely and transparent reporting. 19 % of the respondents say that the financial institutions would greatly benefit from using XBRL, because these companies need to prepare their financial reports in an efficient manner and because they need to report to many parties (local supervision authorities, tax authorities, parent companies, etc.), so XBRL would probably facilitate the multi-fold reporting process. We may see in figure 7 that 8 % of the respondents think that public institutions could also benefit from implementing XBRL, and they base their opinion on the need to improve efficiency, cost savings and timely communication of the information.

Figure 7. Companies and industries which may benefit from using XBRL

We also try to collect information on the perceived benefits of implementing XBRL. This question lists several benefits from which the respondents could choose on a five-level Likert scale. The benefits included as options in the question were: (1) more reliable and accurate data; (2) enhancing information quality for


decision making; (3) greater automation; (4) cost savings; and (5) faster processing. In figure 8 we may see that more than half of the respondents think that faster processing of data is one of the greatest benefits of XBRL implementation, probably because of the importance of exchanging information in a timely manner in order to gain competitive advantage. More than half of the respondents regarded automation as an important benefit as well. Cost savings is not regarded as a major benefit, probably because of the huge initial investment in such an advanced technology.

Figure 8. Benefits from XBRL implementation

It is interesting to witness so many benefits of the XBRL tool and still not have it implemented. Therefore the next question in the survey asks the respondents why they think XBRL is not yet applied in Albania, in the institutions that they think would benefit from using this tool. Several options were provided, to which the respondents could answer either positively or negatively: (1) it would not work for the specific company; (2) XBRL is not mandatory; (3) XBRL does not add value; (4) its implementation is extremely complex and beyond the organization's capacities; (5) its implementation is too expensive; and (6) other reasons. From the results in figure 9 we may see that most of the respondents think that XBRL is not yet implemented because it is not mandatory by law (the other reasons are also presented in the figure).


Figure 9. Reasons why XBRL may not be implemented yet

The last question in the survey addresses the current status of XBRL in Albania, in the respective organizations where the respondents are currently working. They are asked which statement describes their organization: (1) the XBRL topic is not researched yet; (2) the XBRL topic is still under research for possible implementation; or (3) XBRL is thoroughly researched and is about to be (at least partly) applied in the organization.
As expected, 94 % of the respondents answered that the organizations they work with have not yet started to research the topic of XBRL. Only 5 % of the respondents answered that their organizations have already started to research the possibility of applying XBRL. We may further add that the latter respondents are mainly employees of the big four auditing companies operating in Albania. Only one of the respondents stated that his organization has already applied the XBRL tool. (We further researched where this answer came from and found that this individual was working for a company which provides accounting and financial software for the Albanian market. To avoid any confusion we contacted the company offices and investigated the XBRL feature further. We were confirmed that the company was indeed offering accounting software which possessed the XBRL feature, but that this feature was currently not activated or in use by any of its Albanian clients, because of its high cost and because none of the companies saw it as a necessary tool.)


Figure 10. The status of your organization regarding XBRL application

4. MAIN FINDINGS AND CONCLUSIONS

XBRL is a very important tool for efficient and fast information reporting. It is being used in many countries and by many institutions and companies to unify reporting platforms and to convert multi-reporting platforms into a unique reporting language. Many benefits are often associated with this reporting language, but it is also costly to implement, like any other advanced technological innovation. Despite its considerable costs, XBRL has already gained a lot of supporters and is endorsed by many countries. As it is not expected to require changes or adaptation of the financial reporting standards, it has also attracted the attention of the IFAC, and for some years now the Federation has included XBRL in its agenda.
In this paper we try to collect information on the current status of XBRL in Albania: whether this is a new concept for the accounting and auditing professionals or they already have some information about it, and how they perceive this new tool regarding its benefits and costs. We also try to analyze whether there are any companies or institutions in Albania which, in the opinion of the respondents, may benefit from implementing XBRL.
We use a survey to address the above research questions. The survey was electronically sent and processed, and each of the ten questions of the questionnaire is analyzed in detail in this study. We found that most of the accounting and auditing


professionals in Albania already have some degree of information about XBRL. Nevertheless, we also found that a good part of the sample had never heard about this tool before. The most important source of information is postgraduate studies, whereas the professional trainings offered to the respondents never even mentioned the concept. Therefore we conclude that it is time for the training programs of the accounting associations to include this new concept, as well as other new concepts, in what they offer to their participants.
We also find that the professionals perceive a potential XBRL implementation as characterized by high costs and many benefits. The respondents consider it a possible business reporting language to be used by the General Tax Directory, the National Registrar Center, or even by big companies and financial institutions in Albania, because of the rich and wide information they have to process and archive. We also tried to study the current status of XBRL in Albania, and we found that none of the companies, organizations or institutions has started to implement it or plans to do so in the near future. Therefore, this concept is still under discussion only at a theoretical level.

REFERENCES

[1] Baldwin A, Brown C, Trinkle B. “XBRL: An Impacts Framework and Research


Challenge”. Journal of Emerging Technologies in Accounting, December 2006,
Volume 3, Issue 1.
[2] Bonson Ponte. “The Role of XBRL in Europe”. The International Journal of Digital
Accounting Research, ISSN 1577-8517, Vol. 1, Nº. 2, 2001, 101-110.
[3] Boritz JE, No WG, “Security in XML-Based Financial Reporting Services on the
Internet”. Journal of Accounting and Public Policy, 2005.
[4] Cohen, S. Kanza, Y, Kogan, Y. Nutty, W. Sagiv, Y. Serebrenik, A. “EquiX Easy
Querying in XML Databases”. WebDB, 1999.
[5] Dzinkowski R. “Do you speak XBRL?” CA Magazine, December 2008. available at
http://www.camagazine.com/archives/printedition/2008/dec/features/camagazine41
18.aspx
[6] Garbellotto G. "XBRL Implementation Strategies: The Deeply Embedded
Approach”. Strategic Finance 91.5 (Nov. 2009): 56 – 57, 61.
[7] Gruda S. “Accounting Web Reporting in Albania”. Mediterranean Journal of Social
Sciences, Vol 5, No 13, 2014.
[8] Gunn J. “XBRL: Opportunities and Challenges in Enhancing Financial Reporting
and Assurance Processes”. Current Issues in Auditing December 2007, Vol. 1, No.
1, A36-A43.
[9] International Federation of Accountants, 2007 survey and publishing. www.ifrs.org.
[10] Isenmann R, Bey C, Welter M. “Online Reporting for Sustainability Issues”.
Business Strategy and the Environment, 2007.


[11] Lamani D, Cepani L. “Internet Financial Reporting by banks and insurance


companies in Albania”. The Romanian Economic Journal, Year 2011, XIV, 159-174.
[12] Nulty D. “The adequacy of response rates to online and paper surveys: what can be
done?” Assessment & Evaluation in Higher Education, Vol. 33, No. 3, June 2008,
301 - 314.
[13] Pinsker R, Li Sh. “Costs and Benefits of XBRL adoption: early evidence”.
Magazine Communications of the ACM – Urban sensing: out of the woods. Volume
51, Issue 3, March 2008, 47 – 50.
[14] Premuroso R, Bhattacharya S. “Do early and voluntary filers of financial
information in XBRL format signal superior corporate governance and operating
performance”? International Journal of Accounting Information Systems, Volume
9, Issue 1, March 2008, 1 – 20.
[15] Shkurti R, Demiraj R. "Accounting and Financial Web Reporting in Albania".
Journal of Public Administration, Finance and Law, Issue: 4 / 2013, 148-158.
[16] Ristenpart Th, Tromer E, Shacham H, Savage S. “Hey, you, get off my cloud:
exploring information leakage in third-party compute clouds”. CCS ‘09
proceedings of the 16th ACM Conference on Computer and Communications
Security, 2009, 199 – 212.
[17] Roohani S, Xianming Z, Capozzoli EA, Lamberton B. “Analysis of XBRL
Literature: A decade of progress and puzzle”. The International Journal of Digital
Accounting Research, V.10, 2010.
[18] Williams SP, Scifleet PA, Hardy CA. “Online Business Reporting: An Information
Management Perspective”. International Journal of Information Systems, 2006
[19] Zyberi I, Rova L. “The Role of the Accountants in the Framework of the Modern
Technological Developments and Digital Accounting Systems”. European Scientific
Journal. 2014.


BLIND IMAGE DEBLURRING USING A SINGLE-IMAGE SOURCE: THE


STATE OF THE ART

Mihai Zaharescu 1*
Costin-Anton Boiangiu 2
Ion Bucur 3

ABSTRACT

The problem of accidental corruption and restoration of altered or damaged signals from
intentional tampering was long-studied. Image deblurring is an example of signal
restoration: the recovery of an approximate of the original image, which was convolved
with a point spread function and altered by additive noise. This work is an introduction in
the deblurring domain, is meant to offer some definitions and proposes an analysis of the
existing methods for deducing the point spread function from a blurred image. One of the
methods has a big importance for the second part of the paper: the cepstral domain,
through which only close to rectilinear kernels can be recovered at the moment. We will
point that cepstrum analysis is at least as powerful as other methods.

KEYWORDS: blind deblurring, deblurring review, cepstrum analysis, image reconstruction.

1. INTRODUCTION

The point spread function (PSF) is the system's response to an impulse. For example, a camera's PSF can be approximated by taking a photograph of a single small light source, such as a star. Convolving an image that contains a single lit pixel yields the PSF itself.
The image deblurring problem can be broken into two subproblems: first, recovering the PSF, and second, recovering an estimate of the original image using a known PSF. Blind deconvolution methods concentrate on recovering the PSF, while non-blind methods use a known PSF to perform robust deconvolution.
Even if blind deblurring is an ill-posed problem, the PSF can, in practice, be estimated from a single image by imposing restrictions. These can be prior knowledge about how edges should look at the local level [4], or how the gradients' histogram should be shaped [5]. But as more images are included in the equation, the more accurate the results become.
1
* corresponding author, Engineer, Politehnica University of Bucharest, 060042 Bucharest, Romania,
mihai.zaharescu@cs.pub.ro
2
Professor PhD Eng., Politehnica University of Bucharest, 060042 Bucharest, Romania,
costin.boiangiu@cs.pub.ro
3
Associate Professor PhD Eng., Politehnica University of Bucharest, 060042 Bucharest, Romania,
ion.bucur@cs.pub.ro


The information from multiple images can be used to generate good original estimates [2][3] or, in the case of video deblurring, to trace the moving object and consider the motion path as the deblur kernel [1].
The non-blind deconvolution methods focus on minimizing the significant impact that additive noise has in deblurring with a known PSF [6], or on the removal of artifacts originating from approximate PSF estimations [7][8][9] and from data truncation in the altered image [10][11].
This paper is part of the work done during the two years of master studies at the Faculty of Automatic Control and Computers of the "Politehnica" University of Bucharest by the author of [36], and it includes research done by other researchers in the deblurring domain.

1.1. Convolution

The cause of the information alteration in blurred images is the mathematical operation of convolution. The convolution of two functions produces a third function that resembles the first, but which contains characteristics of the second over its entire domain. A delay is also introduced in the signal. Convolution is a typical example of signal alteration in the case of an electrical signal passing through a long wire.
Mathematically it can be defined by [21]:

$$(f \otimes g)(t) = \int_{-\infty}^{\infty} f(T)\, g(t - T)\, dT$$

More intuitively, the function which alters the signal can be viewed as a function of weights used to generate a weighted sum of the neighbors. In the presented formula, the neighborhood is infinite.
Because we intend to work on digital images, this is the definition of convolution in the 2D discrete space [22]:

$$(f \otimes g)(x, y) = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} f[i][j]\, g[x - i][y - j]$$
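As a minimal sketch of how this discrete definition is applied in practice, assuming NumPy and SciPy are available and using an arbitrary 9x9 box kernel as a stand-in PSF:

```python
# Sketch: blurring a grayscale image by 2D discrete convolution with a box kernel.
# The image is a stand-in array; any PSF of odd size could be used instead.
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(256, 256)       # stand-in for a grayscale photograph
kernel = np.ones((9, 9)) / 81.0        # normalized 9x9 box blur (a simple PSF)

# "same" keeps the output size equal to the input; "symm" mirrors the borders
blurred = convolve2d(image, kernel, mode="same", boundary="symm")
```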

Figure 1. It is evident that the image's frequencies are multiplied by the bokeh kernel's frequencies
because in spatial domain the kernel's characteristic is present in every pixel of the image.


The conclusion of this discussion is that the result is similar to the input and contains characteristics of the kernel. Since the kernel's characteristics are present all over the image, in the spatial (pixel position) domain the kernel's pattern can be found in every pixel's neighborhood, while in the frequency domain the kernel's "fingerprint" is added to the image's frequencies. This means that in the frequency domain the image's frequencies are multiplied by the kernel's frequencies (Fig. 1).

1.1.1. Another method of calculating convolution

Based on the observation that convolution is a multiplication of the kernel's frequencies with the signal's frequencies, the same result can be obtained in $O(N \log N)$ time (via the fast Fourier transform) just by multiplying the transforms of the images [26]:

$$(f \otimes g)(t) = \mathrm{Fourier}^{-1}\left[ F(v)\, G(v) \right](t)$$

Deconvolution is the inverse transformation that restores the signal in the spatial domain.
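A minimal sketch of this frequency-domain route, assuming NumPy and a kernel that is padded to the image size and rolled so that its center sits at the origin:

```python
# Sketch: convolution as element-wise multiplication of Fourier transforms.
# This computes a circular convolution; padding strategy is an assumption.
import numpy as np

def fft_convolve(image, kernel):
    # Pad the kernel to the image shape and center it at the origin
    padded = np.zeros(image.shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Multiply the spectra, then return to the spatial domain
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))
```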

1.2. Deconvolution

Deconvolution is the name of the process that restores the initial signal from the convolved signal, given a known convolution kernel. Blind deconvolution is the term used to describe the ill-posed mathematical problem of recovering both signals (image and kernel) from just the corrupted signal.

1.2.1. Inverse Filtering

In the frequency domain, a blurred image $B$ is the result of multiplying a clear original image $C$ with a point spread function $P$: $B = C \cdot P$. Deconvolution is calculated the other way around:

$$C = \frac{B}{P}$$

However, $P$ contains many elements that are close to 0, making this operation unstable. Fig. 2 demonstrates the not (very) useful result obtained without adding a constraint or regularization.

Figure 2. Deconvolution without regularization on a blurred image with additive noise

The noise observed with the naive method is very strong and has a high frequency. Strong high-frequency elements are obtained where $P$ has a very small value; thus, one way to stabilize the solution is to remove the small elements from the division:


 1
 P[t ] γ P[t ], P[t ] < γ
g [t ] =  where g represents the 1/n factor.
1
 , P[t ] ≥ γ
 P[t ]
This method is called Inverse Filtering [23]. The strong high-frequency noise is eliminated, but propagating waves generated by unknown boundaries or a wrongly estimated PSF are still evident.
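A minimal sketch of this thresholded inverse filter, assuming the blurred image and a PSF already padded to the image size are given as NumPy arrays; the threshold gamma is a hand-tuned constant:

```python
# Sketch: inverse filtering with a threshold gamma that caps the gain where
# the PSF spectrum is close to zero, keeping the division stable.
import numpy as np

def inverse_filter(blurred, psf_padded, gamma=0.05):
    B = np.fft.fft2(blurred)
    P = np.fft.fft2(psf_padded)
    eps = 1e-12
    mag = np.abs(P)
    P_safe = np.where(mag < eps, eps, P)          # avoid division by exactly zero
    # Where |P| is large enough, divide normally; elsewhere cap the gain at 1/gamma
    g = np.where(mag >= gamma, 1.0 / P_safe, np.abs(P_safe) / (gamma * P_safe))
    return np.real(np.fft.ifft2(B * g))
```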

Figure 3. Inverse filtering on a naturally blurred image.

1.3. Regularization Techniques

Regularization techniques attenuate the impact that unknown borders, noise or a non-optimal PSF estimation have, by introducing additional information into the system.
By artificially limiting small values in the frequency domain, a regularization was applied to the deconvolution above. Another method is to introduce a constraint into the equations so that the total variation of the result has a small value; the total variation of signals with excessive and possibly fake detail is very high.
By using oriented wavelet packets, the three authors discovered that even a highly noisy photo resulting from deconvolution contains a useful signal which is powerful enough, separable and recoverable.

1.3.1. Richardson-Lucy deconvolution

The Richardson-Lucy method is an iterative spatial-domain approach that includes the regularization in the deconvolution [22]. The basic idea is that the resultant image must be similar to the blurred image, but also that the blurred image must be the result of a convolution of the resultant image with a resultant kernel. The two estimates are iteratively refined:

$$C_j^{(i+1)} = C_j^{(i)} \cdot \left( \frac{B}{C^{(i)} \otimes P} \otimes \hat{P} \right)_j$$

where $\hat{P}$ is the flipped kernel, $\hat{P}(n, m) = P(i - n, j - m)$ for $0 \le n, m \le i, j$, and the initial estimate $C^{(1)}$ can be any image.
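A minimal sketch of this iteration, assuming SciPy's FFT-based convolution and using the kernel flipped in both axes in the role of the back-projection kernel:

```python
# Sketch: Richardson-Lucy iteration. At each step the estimate is multiplied
# by the back-projected ratio between the blurred image and the re-blurred estimate.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(B, P, iterations=30):
    C = np.full(B.shape, 0.5)                     # any initial image works
    P_hat = P[::-1, ::-1]                         # flipped kernel
    for _ in range(iterations):
        denom = fftconvolve(C, P, mode="same") + 1e-12   # avoid division by zero
        C = C * fftconvolve(B / denom, P_hat, mode="same")
    return C
```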


1.3.2. Wiener Deconvolution


Wiener deconvolution works in the frequency domain. It was originally a noise filter aimed at correcting delayed signals from radar machines, by multiplying with a correction function $g(t)$: $r(t) = g(t) \cdot [C(t) + N(t)]$, where $C(t)$ is the original clear signal, $N(t)$ is noise and $r(t)$ is the function intended to equal $C(t + a)$ (the signal delayed because of the transmission distance).
Later, this filter was adapted to work for functions like $B(t) = C(t) * P(t) + N(t)$, where $P$ is the PSF. The solution to this convolution affected by additive noise is the Wiener deconvolution [28]:

$$g(t) = \frac{P^{*}(t)\, S_C(t)}{|P(t)|^{2}\, S_C(t) + S_N(t)}$$

where $S_C$ is the clear image spectrum and $S_N$ the noise spectrum.
$P$ appearing with a greater power in the denominator makes the function act as a deconvolution. It has an extra factor for removing noise with a known spectrum (image signal / (image signal + noise signal)). The estimated clear image is $C = B \cdot g$.
Any of the deconvolution algorithms can be used with similar results on images containing small noise intensities.
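A minimal frequency-domain sketch of this filter, under the common simplifying assumption that the noise-to-signal power ratio $S_N / S_C$ is approximated by a single constant:

```python
# Sketch: Wiener deconvolution with a constant noise-to-signal ratio `nsr`
# standing in for S_N(t) / S_C(t).
import numpy as np

def wiener_deconvolve(blurred, psf_padded, nsr=0.01):
    B = np.fft.fft2(blurred)
    P = np.fft.fft2(psf_padded)
    g = np.conj(P) / (np.abs(P) ** 2 + nsr)   # Wiener correction function
    return np.real(np.fft.ifft2(B * g))
```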

2. PREVIOUS WORK

2.1. Blind deconvolution

The harder problem is the deduction of the kernel that generated the distorted image, using only the distorted image as input. This problem has its origins in space research: because stars are point-like elements, the solution there is relatively simple, and a telescope's PSF can be deduced simply by taking a photograph of a distant star.

2.1.1. Using gradients


However, for a natural image, the first solution came only in 2006 with Fergus's work [5], where he observed that all clear natural photographs have a similar histogram of gradients. The shape of this histogram is changed by a blur transformation, because motion blur inserts similarly oriented gradients over the whole surface of the image.
The PSF is estimated by going from a small resolution to the full resolution, trying to fit the resulting latent image to the reference gradient distribution by varying the PSF.
His work was refined in [6] by filtering the gradients used for the estimation of the PSF. Contrary to expectations, objects smaller than the kernel degrade the prediction, so they should be ignored. The construction method was also changed by forcing the kernel to become a connected motion path.
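A minimal sketch of the gradient-histogram statistic this family of methods relies on, assuming a grayscale NumPy image and an arbitrary bin count:

```python
# Sketch: histogram of image gradients. Sharp natural images show a heavy-tailed
# distribution; blur pulls the mass toward zero and reshapes the histogram.
import numpy as np

def gradient_histogram(image, bins=100):
    gx = np.diff(image, axis=1)                    # horizontal forward differences
    gy = np.diff(image, axis=0)                    # vertical forward differences
    grads = np.concatenate([gx.ravel(), gy.ravel()])
    hist, edges = np.histogram(grads, bins=bins, density=True)
    return hist, edges
```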


2.1.2. Using iterative methods


Iterative methods use the result from a previous step in order to compute the following image.
In [3] two input images are used as a starting point: a noisy image and a blurred image. The denoised image has clear edges in precise positions, and this filtered image serves as the latent image in the iterative kernel estimation algorithm. In turn, the deblurred result can be used to remove the noise from the sharp image, and the sharp image is used to clear the ringing waves from the deblurred image.
The more images are introduced in the deblurring equations, the smaller the ratio of unknowns to knowns becomes and the more accurate the results that can be obtained, as can be noticed in the example of [2].

2.1.3. Cepstral analysis


An observation was made that taking the Fourier transform of the logarithm of the amplitude of the Fourier transform reveals shapes similar to the motion blur kernel. Unfortunately, this method is limited to linear movements, because the recovered PSF is superimposed with different orientations. One approach interprets the cepstrum geometrically, imagining that the same shape is superimposed, and manages to find curves with small curvature [24] (Fig. 4).

Figure 4. Estimation using geometrical cepstrum analysis. Image from [24]
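A minimal sketch of the cepstrum computation described above (log amplitude of the Fourier transform, transformed again), assuming a grayscale NumPy image:

```python
# Sketch: two-dimensional cepstrum of a blurred image. Peaks or line structures
# in this map hint at the motion-blur kernel's length and orientation.
import numpy as np

def cepstrum_2d(image):
    spectrum = np.abs(np.fft.fft2(image))
    log_spectrum = np.log(spectrum + 1e-8)   # small constant avoids log(0)
    cep = np.abs(np.fft.fft2(log_spectrum))
    return np.fft.fftshift(cep)              # center the zero-quefrency component
```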

2.2. Non-uniform motion blur

The majority of the deblurring approaches consider a shift-invariant linear blur model, meaning that the image is blurred the same way everywhere. This holds only if the photographed objects are situated at the same distance, or at considerable distances from the camera (to avoid introducing perspective blur), and the camera follows only a translational movement in a plane parallel to the objects. As this description shows, very few images fall into this class of alterations. The simplest example showing that the blur kernel changes at every pixel of the image is rotational motion blur. Blur caused by individually moving objects is even more difficult to describe.
Some approaches try to deblur moving objects from static backgrounds. One approach is to segment the image based on blur direction [13][15] (Fig. 5). The segments are cut out by means of spectral matting [14], thus maintaining the transparent shading induced by the blur. Another method is to deduce the movement by reducing the local kernel to a line, but in multiple zones of the image [4].


Figure 5. Splitting of a blurred image using the blur information. Image from [15].

Instead of splitting the image, another possibility is to change the kernel over the surface of the image during the deblurring process (only an iterative spatial-domain deblurring is available to the researcher in this situation) [25]. The idea is to generate a 3D kernel based on the real camera movement deduced from the photograph and to project it over the image.

2.3. Artifact Minimization

2.3.1. Deringing

There are several approaches for estimating the PSF and removing a large amount of the amplified noise. A different artifact that is generated during the deblurring process is ringing: the difference between the estimate and the exact information is propagated by adding and subtracting the residual values, generating a periodic ripple near the aberrations.
A method similar to the one presented for the noisy/blurry image pair is to use only the blurred image as a base in order to estimate the ringing artifacts [9]. The initial estimate is a deblurred image. By utilizing the unclear photo, the ring remover deduces uniform patches that may suffer from long-range ringing, resulting from strong far-away edges. Then, small regions are identified around edges in the clarified image, which might suffer from short-range ringing. Afterward, the waves are cleared by a filter that depends on the size of the wave and the distance from the edge.
Similarly, more estimates can be built from different scales or frequencies. In [8], intra-scale refers to fine-tuning inside the respective resolution and inter-scale to using the result from preceding resolutions. The method begins with a small resolution representing the base clarified image for the next bigger resolution. Then it estimates the greater resolution by an iterative Joint Bilateral Richardson-Lucy deconvolution. The edge detection obtained from the coarser-resolution image is the base for more accurate edge detection in the finer-resolution image. A regularization approach removes undesired artifacts in uniform zones. Moreover, using the smaller resolution as a guide and a residual deconvolution algorithm, an important amount of detail can be recovered. This method entirely eliminates ringing and, moreover, generates a sharp image with insignificant texture loss.


2.3.2. Outliers handling


The mathematical model presented in the regularization techniques takes into account only Gaussian additive noise, but other aberrations can also unsettle the convolved image. Bright spots with intensities greater than what the image format can hold are clipped. Clipping, along with dead pixels or hot pixels, is not taken into consideration in the original theoretical model. Color curves represent other influences implemented by software in order to capture an image more similar to what the eye can see. The first step is to remove the color curve by applying a gamma correction, so that the colors vary linearly. The characteristics of the camera can be measured by photographing printed targets [33] or read from the camera manual if available; otherwise, the common PC 2.2 decoding gamma is used. Then, the outlier-elimination algorithm is able to separate pixels that respect the model from those that may be errors (the saturated and the dark pixels). An Expectation-Maximization approach then fills the areas where pixels have been removed [10].
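As an illustration of the linearization step described above, the following minimal Python sketch (our own, not from [10]; it uses NumPy and assumes the common 2.2 decoding gamma mentioned in the text) converts an 8-bit image to linear intensities before deblurring and back afterwards:

import numpy as np

def to_linear(img_8bit, decoding_gamma=2.2):
    # Undo the software color curve so that intensities vary linearly.
    normalized = img_8bit.astype(np.float64) / 255.0
    return normalized ** decoding_gamma

def to_display(linear_img, decoding_gamma=2.2):
    # Re-apply the color curve after processing and return to the 8-bit range.
    encoded = np.clip(linear_img, 0.0, 1.0) ** (1.0 / decoding_gamma)
    return np.round(encoded * 255.0).astype(np.uint8)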
This model has the ability to clear the repetitive and wave-like artifacts that derive from
software truncations and hardware errors. Also, it generates fewer rings caused by non-
linear color transformations.

2.3.3. Noise reduction


A situation where simple regularization techniques fail is when blurred images have a significant amount of noise (the signal source of space telescopes is very far away; medical imaging uses a small amount of radiation in order to minimize the impact on the patient). This means that the noise power is comparable to the signal power.
Wohlberg and Rodriguez developed a model which deals only with impulse noise [16]. The solution of this mathematical model is a modified Total Variation (TV) regularization (this generates an image with the smallest variations between the pixels that still follows the original shape of the signal). The regularization term is defined as

\frac{\lambda}{q}\left\| \sqrt{(D_x u)^2 + (D_y u)^2} \right\|_q^q

where D_x and D_y are the derivative operators and \lambda represents the power of the filter. The measure of how well the generated signal approximates the original one is the p-norm data term

\frac{1}{p}\left\| K u - B \right\|_p^p

where K is the linear operator of the forward problem and B represents the altered signal.
These functions are modified in order to accommodate pixels that fall over or below a threshold. The result is locating and clearing salt-and-pepper noise.
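The two terms above can be evaluated for a candidate image u with a few lines of Python; the sketch below is only illustrative (the function name is ours), assumes forward differences for D_x and D_y, and models K as a convolution with a known kernel via scipy.signal.fftconvolve:

import numpy as np
from scipy.signal import fftconvolve

def tv_objective(u, kernel, blurred, lam=0.05, p=1.0, q=1.0):
    # Data fidelity: (1/p) * || K u - B ||_p^p, with K applied as a convolution.
    residual = fftconvolve(u, kernel, mode='same') - blurred
    data_term = (np.abs(residual) ** p).sum() / p
    # Discrete forward differences approximate the horizontal/vertical derivatives.
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    grad_mag = np.sqrt(dx ** 2 + dy ** 2)
    # Regularizer: (lam/q) * || sqrt((Dx u)^2 + (Dy u)^2) ||_q^q; p = q = 1 matches the
    # impulse (salt-and-pepper) noise setting discussed above.
    tv_term = lam * (grad_mag ** q).sum() / q
    return data_term + tv_term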
In [6] a similar but faster technique for impulse noise and Gaussian noise is used.
In [17] the authors use the idea that most natural images have most derivatives around 0, and use a sparse prior that opts to concentrate the derivatives at a small number of pixels, leaving the rest almost untouched in the deconvolution process.


In this manner, the image has sharp edges, less noise and smaller ringing artifacts, yet fine texture details are lost in the deconvolution process.
Instead of trying to generate a deconvolution model that includes noise, in [12] the clear image is reconstructed from the very noisy deblurred image. This implies very powerful filtering by means of 26 oriented complex wavelet packets.

3. RECENT ADVANCES CONCLUSIONS

During the past few years, this domain has evolved to the point of actually constructing useful everyday applications, like introducing special camera apertures [20] or coded camera exposures [19].
These results can help in other connected domains, like super resolution (by removing the blur that is inherently generated during the combination of multiple images [32][18], or by extracting more information from the larger space that has been occupied by the moving object in the image [19]).
The recent advances are presented in other papers addressing just these issues: [35][37].

3.1. Non-blind deconvolution method conclusions

The inverse filtering method will be used as non-blind deconvolution, as it offers similar
results to other regularized methods and is easy to implement.
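A minimal sketch of such a regularized inverse filter in the frequency domain is given below (our own illustration, assuming a known blur kernel; the small constant eps is added only to keep the division numerically stable):

import numpy as np

def inverse_filter(blurred, kernel, eps=1e-3):
    # Pad the kernel to the image size so the two spectra have matching shapes.
    padded = np.zeros_like(blurred, dtype=np.float64)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    # Center the kernel so the deblurred output is not circularly shifted.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(padded)
    B = np.fft.fft2(blurred)
    # Pseudo-inverse: attenuate frequencies where the kernel response is very small.
    U = B * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(U))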

3.2. Point Spread Function estimation conclusions

Present blind deconvolution methods rely on assumptions regarding the image type in
order to work. The three main directions include:
• Iterative deconvolution based upon Fergus's work, which goes through image scales and estimates the kernel in order to obtain a gradient distribution as close as possible to a reference considered an ideal natural image.
• Iterative deconvolution based on the Richardson-Lucy method with the minimization of total variation (a minimal sketch of the basic iteration is given after this list). The assumption here is that the resultant image should have small energy and variation, because the high-energy part is given by the induced noise.
• Cepstrum geometrical analysis methods that try to recover as much as possible of the kernel from a heavily distorted signal similar to it, observed in the Fourier transform of the logarithm of the amplitude of the Fourier transform of the signal. The assumption here is that the kernel has a simple filiform shape.
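The basic multiplicative iteration behind the second direction can be summarized as follows; this is our own simplified sketch of the classic Richardson-Lucy update, without the total-variation term (which would enter as an additional per-iteration correction), and it assumes a known, normalized kernel:

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, kernel, iterations=30, eps=1e-7):
    # Start from a flat estimate with the same mean intensity as the observation.
    estimate = np.full_like(blurred, blurred.mean(), dtype=np.float64)
    kernel_flipped = kernel[::-1, ::-1]  # adjoint (flipped) blur kernel
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, kernel, mode='same')
        ratio = blurred / (reblurred + eps)  # compare the observation with the prediction
        estimate *= fftconvolve(ratio, kernel_flipped, mode='same')
    return estimate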
In the second part of the paper, we start developing a technique for recovering the blurring kernel in the cepstral domain. Unlike the presented cepstral-domain method, we aim at recovering any shape for the kernel and, unlike other methods that rely on presumptions, the only presumption we make is the natural one: that the kernel is the most repetitive structure in the image. PSF filtering methods will be introduced and results for artificial and natural images will be offered.
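For reference, the quantity inspected by such cepstral methods (the Fourier transform of the logarithm of the amplitude of the Fourier transform of the blurred image, as described above) can be computed in a few lines of NumPy; this is a sketch of ours, and the small constant added inside the logarithm is only a numerical safeguard:

import numpy as np

def image_cepstrum(blurred, eps=1e-8):
    # Amplitude spectrum of the blurred image.
    spectrum_amplitude = np.abs(np.fft.fft2(blurred))
    # Fourier transform of the log-amplitude gives the cepstrum.
    cepstrum = np.fft.fft2(np.log(spectrum_amplitude + eps))
    # Shift the zero-frequency component to the center for easier visual inspection.
    return np.fft.fftshift(np.abs(cepstrum))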


REFERENCES

[1] Ben-Ezra M., Nayar S. K., “Motion-Based Motion Deblurring”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 26, Issue 6, June 2004, pp. 689-698
[2] Jian-Feng Cai, Hui Ji, Chaoqiang Liu, Zuowei Shen, “Blind Motion Deblurring Using Multiple Images”, Journal of Computational Physics, Volume 228, Issue 14, August 2009, pp. 5057-5071
[3] Lu Yuan, Jian Sun, Long Quan, Heung-Yeung Shum, “Image Deblurring With
Blurred/Noisy Image Pairs”, ACM Transactions on Graphics (TOG) - Proceedings
of ACM SIGGRAPH 2007 TOG Homepage, Volume 26 Issue 3, July 2007, Article
No. 1
[4] Shengyang Dai, Ying Wu, “Motion From Blur”, Computer Vision and Pattern
Recognition, 2008. CVPR 2008. IEEE Conference, 23-28 June 2008, pp. 1 - 8
[5] Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis, William T. Freeman,
“Removing Camera Shake From A Single Photograph”, ACM Transactions on
Graphics (TOG) - Proceedings of ACM SIGGRAPH 2006 TOG Homepage,
Volume 25 Issue 3, July 2006, pp. 787 - 794
[6] Li Xu, Jiaya Jia, “Two-Phase Kernel Estimation For Robust Motion Deblurring”,
ECCV'10 Proceedings of the 11th European conference on Computer vision: Part I,
pp. 157-170
[7] Qi Shan, Jiaya Jia, Aseem Agarwala, “High-Quality Motion Deblurring From A
Single Image”, ACM Transactions on Graphics (TOG) - Proceedings of ACM
SIGGRAPH 2008 TOG Homepage, Volume 27 Issue 3, August 2008, Article No. 73
[8] Lu Yuan, Jian Sun, Long Quan, Heung-Yeung Shum, “Progressive Inter-Scale And
Intra-Scale Non-Blind Image Deconvolution”, ACM Transactions on Graphics
(TOG) - Proceedings of ACM SIGGRAPH 2008 TOG Homepage, Volume 27 Issue
3, August 2008, Article No. 74
[9] Le Zou, Howard Zhou, Samuel Cheng, Chuan He, “Dual Range Deringing For Non-Blind Image Deconvolution”, Image Processing (ICIP), 2010 17th IEEE International Conference, 26-29 Sept. 2010, pp. 1701-1704
[10] Sunghyun Cho, Jue Wang, Seungyong Lee, “Handling Outliers In Non-Blind Image
Deconvolution”, Computer Vision (ICCV), 2011 IEEE International Conference, 6-
13 Nov. 2011, pp. 495 - 502
[11] Jong-Ho Lee, Yo-Sung Ho, “Non-Blind Image Deconvolution With Adaptive
Regularization”, PCM'10 Proceedings of the 11th Pacific Rim conference on
Advances in multimedia information processing: Part I, Pages 719-730
[12] André Jalobeanu, Laure Blanc-Féraud, Josiane Zerubia, “Satellite Image
Deconvolution Using Complex Wavelet Packets”, Image Processing, 2000.
Proceedings. 2000 International Conference, 2000, Vol. 3, pp. 809 - 812


[13] Qi Shan, Wei Xiong, And Jiaya Jia, “Rotational Motion Deblurring of a Rigid
Object from a Single Image”, Computer Vision, 2007. ICCV 2007. IEEE 11th
International Conference, pp. 1 – 8
[14] Anat Levin, Alex Rav-Acha, Dani Lischinski, “Spectral Matting”, Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference, June 2007, pp. 1-8
[15] Sunghyun Cho, Yasuyuki Matsushita, Seungyong Lee, “Removing Non-Uniform
Motion Blur from Images”, Computer Vision, 2007. ICCV 2007. IEEE 11th
International Conference, 14-21 Oct. 2007 pp. 1 - 8
[16] Brendt Wohlberg, Paul Rodriguez, “An L1-TV Algorithm For Deconvolution With
Salt And Pepper Noise”, ICASSP '09 Proceedings of the 2009 IEEE International
Conference on Acoustics, Speech and Signal Processing, pp. 1257-1260
[17] Anat Levin, Rob Fergus, Fredo Durand, William T. Freeman, “Deconvolution
Using Natural Image Priors”, ACM Trans. Graphics, Vol. 26, No. 3. (2007)
[18] Michael Irani, Shmuel Peleg, “Super Resolution From Image Sequences”, 10th
ICPR, Vol. 2, Jun 1990, pp. 115-120
[19] Amit Agrawal, Ramesh Raskar, “Resolving Objects At Higher Resolution from A
Single Motion-Blurred Image”, Computer Vision and Pattern Recognition, 2007.
CVPR '07. IEEE Conference, 17-22 June 2007, pp. 1 - 8
[20] Anat Levin, Rob Fergus, Fredo Durand, William T. Freeman, “Image And Depth
From A Conventional Camera With A Coded Aperture”, ACM Transactions on
Graphics (TOG) - Proceedings of ACM SIGGRAPH 2007 TOG Homepage, Vol.
26, Issue 3, July 2007, Article No. 70
[21] http://en.wikipedia.org/wiki/Convolution, accessed 30.04.2015
[22] http://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution, accessed 30.04.2015
[23] http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj2/inverse.html, accessed 30.04.2015
[24] Yuji Oyamada, Haruka Asa, Hideo Saito, “Blind Deconvolution for a Curved
Motion Based on Cepstral Analysis”, IPSJ Transactions on Computer Vision and
Applications Vol. 3 (2011) pp. 32-43
[25] Oliver Whyte, Josef Sivic, Andrew Zisserman, Jean Ponce, “Non-uniform
Deblurring for Shaken Images”, Computer Vision and Pattern Recognition
(CVPR), 2010 IEEE Conference, pp. 491 - 498
[26] Eliyahu Osherovich, Michael Zibulevsky, Irad Yavneh, Signal Reconstruction From
The Modulus of its Fourier Transform, 24/12/2008 Technion
[27] J. P. Lewis, Fast Normalized Cross-Correlation, Industrial Light & Magic
[28] http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj2/wiener.html, accessed 30.04.2015


[29] Costin-Anton Boiangiu, Paul Boglis, Georgiana Simion, Radu Ioanitescu, "Voting-
Based Layout Analysis", Journal of Information Systems & Operations
Management (JISOM), Vol. 8, No. 1 / June 2014, pp. 39-47
[30] Costin-Anton Boiangiu, Radu Ioanitescu, “Voting-Based Image Segmentation”,
Journal of Information Systems & Operations Management (JISOM), Vol. 7, No. 2
/ December 2013, pp. 211-220
[31] Costin-Anton Boiangiu, Mihai Simion, Vlad Lionte, Zaharescu Mihai – “Voting-
Based Image Binarization”, Journal of Information Systems & Operations
Management (JISOM), Vol. 8, No. 2 / December 2014, pp. 343-351
[32] Florin Manaila, Costin-Anton Boiangiu, Ion Bucur, “Super Resolution From Multiple Low Resolution Images”, Journal of Information Systems & Operations Management (JISOM), Vol. 8, No. 2 / December 2014, pp. 316-322
[33] Costin-Anton Boiangiu, Alexandru Victor Ştefănescu, “Target validation and image
calibration in scanning systems”, in Proceedings of the 1st International
Conference on Image Processing and Pattern Recognition (IPPR '13), Budapest,
Hungary, December 10-12, 2013, pp. 72-78
[34] M. Pollefeys, M. Vergauwen, F. Verbiest, K. Cornelis, L. Van Gool, “From image
sequences to 3D models”, Proc. Automatic Extraction of Man-Made Objects From
Aerial and Space Images (III), pp.403-410, 2001
[35] Mihai Zaharescu, Costin-Anton Boiangiu, “An Investigation of Image Deblurring
Techniques”, International Journal Of Mathematical Models And Methods In
Applied Sciences, Vol. 8, 2014
[36] Mihai Zaharescu, “Deblurring-ul unei singure imagini” (Single-Image Deblurring), Master Thesis, Unpublished Work, Bucharest, Romania, 2013.
[37] Mihai Zaharescu, Costin-Anton Boiangiu, “Image deblurring: challenges and
solutions,” in Proceedings of the 12th International Conference on Circuits,
Systems, Electronics, Control & Signal Processing (CSECS '13), Budapest,
Hungary, December 10-12, 2013, pp. 187-196.


FACILITIES AND CHANGES IN THE EDUCATIONAL PROCESS WHEN USING OFFICE365

George Căruţaşu 1*
Mironela Pirnau 2

ABSTRACT

Microsoft Office 365 is the most appropriate set of software tools for the educational and business processes, having multiple facilities in terms of email, agenda management, contact management, videoconferences, and editing, storing and sharing documents. With the transition to Office 365, there occurred not only a change in technology but also a change of pedagogical methods by means of technology. Office 365 for Education provides the necessary support for developing the educational processes, including conducting lessons without paper, using the OneNote facility. At present, the educational activities on the Office 365 platform cover the basic facilities of the educational process, by means of the collaboration and communication between SharePoint and Lync. The Office 365 platform functions in the cloud, with permanent access, from any device, through the Internet, and all the important tools needed for managing an efficient educational process become very easy to manage and exploit.

KEYWORDS: Education, Office365, Cloud, PowerShell

INTRODUCTION

The Office 365 education platform contains many features that help the academic environment; the major operations this platform provides are:
• Adding new users to the Office 365 domain.
• Assigning licenses and permissions from the Office 365 domain to users.
• Defining the connection details and password reset possibilities.
• Configuring SharePoint sites for administrative activities, for collaborative work in teams to achieve various projects, for learning, for different subjects and for creating personal websites.
• The possibility of editing and improving various elements of the sites.
• Setting up a simple website and the possibility of making it visible to the public.
• The possibility to track groups, individuals and documents.
• Configuration of applications for SharePoint sites.

1* corresponding author, Professor PhD, Faculty of Computer Science for Business Management, Romanian-American University, Bucharest, carutasu.george@profesor.rau.ro
2 Associate Professor, PhD., Titu Maiorescu University, Bucharest, mironela.pirnau@utm.ro


• Configuring team sites and allocating tasks to team members using SharePoint applications.
As the main benefits of Office 365 we have identified: authenticated access, based on user and password, from anywhere and at any time to the information assigned by the IT administrator [1-3]; Office use in any browser from a smart-phone or tablet, no matter what device the user is working on, as the content, information and settings are the same; working simultaneously on a document with multiple users without the occurrence of content conflicts; the ability of users to work without an Internet connection, with the Office package installed locally, the modified documents synchronizing automatically when the Internet connection returns; immediate blocking of mobile devices or tablets in case of theft, damage or loss. In Table 1 we present the main facilities of the applications in the Office 365 package and their role in education [4-5].
Table 1. The main elements included in the Office 365 suite of applications
• Provides a superior e-mail service, also containing calendar and contacts, with a 50 GB inbox.
• Allows organizing and sharing work by collecting course materials in one digital notebook. You can also create interactive content to easily collaborate with students.
• Allows creating professional-looking models/projects using text, images and videos, which can then be shared.
• Teachers and students get online versions of Word, PowerPoint, Excel and OneNote. There is also the possibility to provide them the full Office applications, which can be installed on up to 5 PCs or Macs for free.
• Provides OneDrive storage space of 1 TB to share homework, provide feedback and let students and teachers work together in real time, also on mobile devices.
• Making online courses and other events with audio and video content, on any device, enabling screen sharing.
• PowerPoint presentations, Word and Excel documents, PDFs and Sways can be grouped into a unique profile and then shared with colleagues and students.
• Delve allows obtaining customized content from Office 365 and shows how teachers interact with documents and with each other.

• Planner facilitates the creation and sharing of lesson plans, organizing and assigning themes in class and talking about the things you work on.
• Work with teachers and support staff in projects using team sites that help keep related documents, notes, tasks and conversations organized together.
• Tools like Yammer help classmates and instructors remain connected at all times, encouraging open communication and deepening notions.
• Office 365 Video is a company-wide destination for uploading, sharing and discovering video files, with smooth playback on all devices.

In practice, managing an Office 365 solution requires accomplishing the following tasks: adding and managing users; Outlook and Exchange Online management; SharePoint Online administration; company public site management; Lync Online administration; Polycom Lync integration; Office 365 subscription management [6].

PRACTICAL STUDY FOR USE OF GROUPS IN OFFICE 365

Office 365 platform users can be managed by using security groups, groups at SharePoint level and distribution groups. Administration of these groups can be made both in the Office 365 interface (Figure 1), using ADManager Plus, which helps to define new accounts for users of Office 365 (based on Azure Active Directory) through a single web-based console, and through PowerShell scripts [4-8].

Figure 1. Creating groups by using the Office365 interface

The main PowerShell commands for using groups are displayed in Table 2.


Table 2. PowerShell commands for groups [8]

PowerShell command | Role of the command
Get-DistributionGroup | View the existing distribution groups in the organization.
New-DistributionGroup | Create a distribution group.
Remove-DistributionGroup | Delete a distribution group.
Set-DistributionGroup | Change the properties of an existing distribution group.
Add-DistributionGroupMember | Add a recipient to an existing distribution group.
Get-DistributionGroupMember | View the members of an existing distribution group.
Remove-DistributionGroupMember | Delete a recipient from the membership of a distribution group.
Get-Group | View all distribution groups, security groups, and role groups in your organization.
Add-MsolGroupMember | Add members to a security group. New members can be either users or other security groups.
Get-MsolGroup | Retrieve Azure AD groups. The cmdlet can return a single group (if an ObjectId is passed) or search across all groups.
Get-MsolGroupMember | Retrieve the members of the specified group. Members can be either users or groups.
New-MsolGroup | Add a new security group.
Remove-MsolGroup | Delete a group.
Remove-MsolGroupMember | Remove a member from a security group.
Set-MsolGroup | Update the properties of a security group.

As a case study for working with PowerShell commands on Office 365 accounts, we will go through the commands below in the following order:


• Launch the Windows PowerShell application;
• Run $LiveCred = Get-Credential to connect to a management account of the users in an organization. Following this command, the authentication elements are defined in the window in Figure 2.

Figure 2. Credentials for authentication

• $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
• Import-PSSession $Session
• Get-ExecutionPolicy
• Set-ExecutionPolicy Unrestricted
• We will create a distribution group with the New-DistributionGroup -Name "Grupa601INFO" command.
• If the users to be added are stored in an external CSV file of the form Display name, Alias, E-mail address, then the command that adds users to the previously created group is:
Import-Csv "c:\csv\csv.txt" | foreach{Add-DistributionGroupMember -Identity "Grupa601INFO" -Member $_.alias}
• Example of displaying the members of a distribution group: Get-DistributionGroupMember -Identity "GrupaD103INFO"
• Display of the distribution groups: Get-DistributionGroup
• Creation of a distribution group: New-DistributionGroup -Name "Grupa602INFO" -Type Security
• Removal of a distribution group: Remove-DistributionGroup "Grupa601INFO"


ONENOTE, CLASS NOTEBOOK AND SHAREPOINT FACILITIES FOR EDUCATION

For communication and easy sharing of data and information it is recommended that users be grouped. OneNote is a good application with a role in organizing resources in notebooks, sections and pages, and also for teacher-student collaborative work. In this regard, according to Figure 3, OneNote Online has two sections: Teacher and Student [9-10].

Figure 3. OneNote Online

To highlight this feature, we created sections for which we defined templates in order to assist students in preparing the practical applications of theoretical notions and in fast learning. Once templates have been created, students can study them in order to realize their own projects/portfolios, which can then be shared both with teachers and with other colleagues.
Moreover, if we use a locally installed version of OneNote, we can add multimedia elements, audio and video recordings for completed projects. This is achievable since OneNote synchronizes between the installed version of OneNote and OneNote Online, so all changes/additions are updated in real time, although there may be multimedia elements that cannot be used with OneNote Online. To highlight the OneNote facilities, we created examples of Sections and Templates, as shown in Figure 4.


Figure 4. Case study: OneNote Sections

Sharing these resources is achieved by assigning rights to edit or change the shared resources, as shown in Figure 5.

Figure 5. Window for sharing groups and users


The Class Notebook application allows a teacher to set up a workspace for each student, a library of content and a collaborative space for classes and activities, all in one notebook. With the help of this tool, teachers can easily add students to a shared notepad, enabling differentiated teaching, content delivery and a digital collaboration space where students can work together. Also, teachers can provide feedback in real time. To understand how to use these facilities, a site was created that can be found at http://onenoteineducation.com/ro-RO. The application is easy to use and allows the following steps: choosing the name of the notebook; displaying the shared sections in the notepad, in order to distribute materials and put students to work collaboratively; adding associate professors; adding students; setting sections for students. According to the above discussion, students or teachers can be added to the created workspaces (Figure 6).

Figure 6. Adding students to the Class Notebook

SharePoint is the system behind the Microsoft web sites and offers students multiple opportunities to create online and offline web content for information and individual work, to organize working groups, and to manage related documents, document versions, blogs and other elements necessary for learning activities. SharePoint provides perfect integration with the other Microsoft technologies available in Office 365.
As a key part of Office 365, SharePoint offers facilities for information management and collaboration that are integrated in the Office applications, which teachers and students use every day to create and develop documents, coordinate teamwork, and make decisions.
The new unified access system and intelligent search for all Office 365 documents make creation, sharing and collaboration easier from anywhere and on any device. Either through one of the Office 365 mobile applications or directly from the browser, important files will now always be at your fingertips, with the new search technology and selective synchronization between PC, cloud and mobile devices.


The SharePoint mobile application, developed for Windows, iOS and Android, brings intranet functionality to mobile devices for use regardless of location and type of implementation. The SharePoint home page (Figure 7) facilitates visualization and unified access to all the sites and activities of a user, creating an intranet experience and efficient use of all resources, both internal and in the cloud [12].

Figure 7. SharePoint home page (https://staffrau.sharepoint.com/SitePages/Home.aspx)

SharePoint brings conditional smart access policies that automatically adapt to the working context, based on the identity, location or device used. Data Loss Prevention (DLP) algorithms, integrated into SharePoint and Office 365, allow defining important data and automated access or transfer policies for it. At any time, the resources and applications used in the Office 365 platform by each user/student/teacher can be monitored, as shown in Figure 8. Moreover, all these reports and web pages are synchronized on your PC, tablet or smart-phone regardless of the operating system of the equipment used [11].

Figure 8. Active users


In practice, a SharePoint Online administrator can perform the following activities:


• creating, deleting and managing websites collections;
• allotting and monitoring the storage space of the websites collection;
• granting access for the website collection administrators;
• configuring the implicit SharePoint site;
• planning multi-language sites;
• managing user profiles;
• planning and managing characteristics such as managed metadata;
• activating or deactivating external sharing.
At the same time, users can be added to the team website and permissions for users can be adjusted; in this way, the levels of permission and the membership of a SharePoint group are established. A group is a set of users defined at the website collection level for an easy management of permissions.
Each group is given an implicit level of permission. A site owner has the total control
level of permission (Figure 9) and his/her main responsibilities are:
• creating and managing websites;
• managing the websites characteristics and setting, such as style and appearance;
• saving a website as a template;
• managing the website columns and the content site types;
• deleting a site;
• adjusting the regional and linguistic settings;
• adding applications in websites;

Figure 9. Permission levels


Also, Team Site can communicate effectively with the student groups, organized by study
levels (Figure 10).

Figure 10. The Folder Structure on the Team Site level.

Users who access Team Site are managed in the SharePoint Group, for which permissions
are defined in the Figure 11.

Figure 11. SharePoint Group Permissions


CONCLUSIONS

Office 365 is a cloud-type service offered by Microsoft, hosted on Microsoft's servers, and academic institutions can benefit from Office 365 Education for free or can upgrade to more complex features with a significant price reduction. The Office 365 plans for education include many features, such as service level agreements guaranteeing 99.9% availability; IT and web support and non-stop phone support for critical issues; Active Directory integration to easily manage user credentials and permissions; and global data security and access from anywhere and from any device (PC, tablet, smart-phone) to the Office 365 applications and tools for students and teachers.
Simultaneous access for students and teachers to the same documents, reports, projects and presentations, the possibility to connect teachers and students through Lync Online and SharePoint Online, and HD audio-video conferencing on PC, tablet or smart-phone lead to an education system that meets the highest educational standards [12].

REFERENCES

[1] M. Katzer, D Crawford - Using Office 365 and Windows Intune, 2013 - Springer
[2] K. Murray, Microsoft Office 365: Connect and Collaborate Virtually Anywhere,
Anytime, 1st ed. Microsoft Press, 2011
[3] S. Krishnan, Programming Windows Azure, O'Reilly Media ISBN: 978-0-596-
80197-7
[4] G. Carutasu, M. Botezatu., C. Botezatu, M. Pirnau, Cloud Computing and Windows
Azure, ECAI 2016, 8th Edition
[5] X. Song, Y. Ma, and D. Teng, “A load balancing scheme using federate migration
based on virtual machines for cloud simulations,” Mathematical Problems in
Engineering, vol. 2015, Article ID506432, 11 pages, 2015.
[6] E. Cayirci, “Modeling and simulation as a cloud service: a survey,” in Proceedings
of the Winter Simulation Conference, pp. 389–400, IEEE, Washington, DC, USA,
December 2013
[7] N. L. Căruţaşu, G. Căruţaşu Cloud ERP implementation, FAIMA Business &
Management Journal, Vol.4 Issue 1/2016, pp. 31-43, 2016, ISSN 2344-4088.
[8] https://msdn.microsoft.com/en-us/library/azure/dn919663.aspx
[9] J. Li, M. Humphrey, D. Agarwal, K. Jackson, C. van Ingen, and Y. Ryu, "eScience
in the cloud: A MODIS satellite data reprojection and reduction pipeline in the
Windows Azure platform, " in Parallel Distributed Processing (IPDPS), 2010 IEEE
International Symposium on, april 2010, pp. 1 - 10.
[10] E. Pluzhnik, E. Nikulchev, S. Payain. (2014) “Optimal Control of Applications for
Hybrid Cloud Services” Proc. 2014 IEEE World Congress on Services (SERVICES
2014)


[11] Barca Cristian, Barca Claudiu, Cucu Cristian, Gavriloaia Mariuca-Roxana, Vizireanu Radu, Fratu Octavian, Halunga Simona, "A Virtual Cloud Computing Provider for Mobile Devices", Proceedings of the International Conference ECAI 2016
[12] https://products.office.com/en-gb/academic/compare-office-365-education-plans


USING CLOUDS FOR FPGA DEVELOPMENT – A COMMERCIAL PERSPECTIVE

Laurenţiu A. Dumitru 1*
Sergiu Eftimie 2
Ciprian Răcuciu 3

ABSTRACT

Field Programmable Gate Arrays (FPGA) are electronic devices that can be reconfigured
at runtime. Due to the fact that they implement a small number of dedicated functions,
FPGAs are used for hardware acceleration, alongside with general purpose processors.
Several vendors provide different Integrated Development Environments, but all of them
support the standard VHDL and Verilog hardware description languages. After the
development phase, implementing an FPGA design can be a time-consuming and CPU-intensive task. The current paper examines existing technical solutions that provide build
parallelism at high speeds, as opposed to workstation-local building, and tries to estimate
at what point migrating towards a third party justifies the costs.

KEYWORDS: FPGA development, FPGA synthesis in clouds

1. INTRODUCTION

Field Programmable Gate Arrays are now used in more and more environments, due to
their versatility and performance. From real-time tasks such as feedback and control in
automotive and aviation applications to more general-purpose such as enabling IOT
connectivity, FPGAs bridge the gap between flexibility and hardware-implemented
algorithms. Traditional software, executed on a normal microprocessor, runs sequentially,
while algorithms executed on FPGA hardware run in parallel. Furthermore, FPGAs
interact directly with the external environment through input/output pins of various physical characteristics. This capability makes them perfect for multiple sensor acquisition
at very high speeds.
In normal computing systems, FPGAs can be found in many places such as North and
South Bridges, network cards or dedicated accelerators. Since FPGA chips can be
reprogrammed as the need occurs, many vendors have chosen to use FPGA over
traditional ASICs (Application Specific Integrated Circuit) due to easier firmware
1* corresponding author, PhD. Cand., Military Technical Academy, 39-49 George Coşbuc Bvd., Bucharest, Romania, dlaur@nipne.ro
2 PhD. Cand., Military Technical Academy, 39-49 George Coşbuc Bvd., Bucharest, Romania, sergiu.eftimie@gmail.com
3 Univ. Prof. PhD., Military Technical Academy, 39-49 George Coşbuc Bvd., Bucharest, Romania, ciprian.racuciu@gmail.com


upgrade and lower time-to-market. Evolution of fabrication techniques has permitted


higher operating frequencies that were not available in the past. The 16nm fabrication
process and technologies such as 3D IC offer higher interconnection speeds, larger
reprogrammable areas and lower power consumption.
System-On-A-Chip architectures usually combine a normal microprocessor, such as ARM
Cortex, with reprogrammable logic, such as an FPGA chip. The hardware microprocessor
runs at a higher frequency than the FPGA, thus is capable of running modern operating
systems without sacrificing performance, while the FPGA has the flexibility to implement
various interfaces for outside communication. Such an example is the Xilinx Zynq product
family. SoC designs are found in many consumer electronics that are IOT-capable.
Modern FPGA families have multiple capabilities such as radiation tolerance, integrity
checking, error correction and partial reconfiguration. Such abilities make them suitable
for mission-critical environments where the error tolerance rate is extremely low. In the past,
FPGAs did not provide any technical solutions for upgrading a device without rebooting it
but now, partial reconfiguration can be used for in-field software upgrades without the
need of restarting the device. Such a feature can prove to be very useful in set-ups where
high availability is needed. Multi-gigabit transceivers that can connect to many mediums
have boosted the use of FPGAs in consumer-grade devices. The development for FPGA is
usually done in a Hardware Definition Language (HDL), such as VHDL or Verilog, with
the help of an Integrated Development Environment provided by the chip's vendor.
Most of these IDE's provide Software Development Kits that ease the deployment of
software stacks on top of hardware or software-implemented processors. The
development process of an embedded system can be split in hardware development and
software development. Hardware development refers to the architecture implemented in
the FPGA chip, not including the electronic part, but including any HDL-related code,
while software development refers to applications written in high-level languages that are
to be executed on the microprocessors available on the hardware design.
The software development of an embedded system usually refers to programming the
soft-core or physical microprocessor units installed. It can be done in ultra-low level
languages, such as assembly, or with more developer-friendly alternatives such as C/C++.
There are not so many IDEs that support just-in-time languages due to the necessity of an
underlying translation unit, which, on embedded systems, can occupy resources and
consume more power without giving back any benefits. For solutions that support
multiple microprocessor architectures a programming language that can easily be ported
to many targets is wanted. During the development phase, virtual environments that
simulate the target architecture can be used to validate a specific code, before trying it on
the development boards.
The development and testing of the hardware architecture is a process that can be time-
consuming. The time interval of generating the final product of the design, the bit stream
that is to be programmed in hardware, varies with the size of the FPGA, the complexity of
the HDL code and the speed of the system on which the process is taking place.
Functional validation is done by simulating the design, but, in many cases, actual
validation can take place only when the hardware is programmed, probes are placed in


specific parts and physical test benches evaluate the overall behaviour. Such an example is the development of a system that has PCIe connectivity. It can be fully tested only when the card is placed in an actual system, the drivers are active and data flow can occur. If the desired behaviour is not achieved, an error occurs or it just does not work, the system engineer must start the whole process all over again. Repeating this process can be painful given the fact that it can take from a few tens of minutes to tens of hours. If a company is only occasionally building such systems, it can be assumed that at best only a few workstations are available for the development. It is obvious how these high implementation times can affect a specific time frame inside a project's workflow.
While there are methods of speeding up the implementation, most of them come at a cost. Companies that have FPGA as a primary market have their own dedicated computing farms for such situations. Other companies, that do FPGA development only from time to time, cannot justify the implementation costs of high performance systems. Alternatives exist for these companies. This paper analyses how FPGA implementation processes can benefit from computing clouds, what the technical requirements are and what costs can be expected. The proposed technical approach is not the only one that could be implemented.

2. RELATED WORK

Since FPGA development is still considered inaccessible to the average consumer-oriented company, due to high man-power costs, high technical costs and increased technical complexity, there are not many viable remote building/compiling solutions. However, large companies such as Intel and Microsoft have already massively deployed FPGAs in their cloud products. When FPGAs will be endorsing more and more technologies, more alternatives will be added to the few build options available to the current design flow.

Figure 1. NI Compile Cloud Architecture


LabVIEW offers the “NI LabVIEW FPGA Compile Cloud Service” for its clients. It is described in their whitepaper as providing shorter compile times enabled by high-performance Linux-based servers in the cloud, improved productivity by performing multiple compiles at a time, in parallel, and the convenience of being able to power down your PC at any time during a compile. In [Fig. 1] we can observe the architecture, based on a client-service model, as described by the vendor.
Such a solution is, unfortunately, only available for customers that use their technologies. However, based on the architectural model, one can envision a general approach of the same workflow.
Trying to address the same gap, “Plunify”, a Singapore-based company founded in 2009, develops a cloud platform that enables semiconductor chip designers to shorten product time to market and reduce development costs. Plunify offers its products as add-ons to existing IDEs provided by various FPGA vendors. InTime, the product that addresses optimization and timing closure by means of machine learning and raw computing power, supports Xilinx's ISE and Vivado and Altera's Quartus. It has various run targets: local systems, private clouds and public clouds. Since its launch, it has developed several interfaces that allow a developer to run multiple scenarios in parallel, manage computing resources, do scheduling and various other tasks. Regression testing, design optimization and resource management are also available.
During the process of developing an FPGA design, a series of compilation cycles is needed. Plunify also offers tools for regression testing and benchmarking that are highly important in these cases and help to analyse if a feature has been affected or broken due to design changes. Automation also plays an important role in the whole process because it can identify flaws with minimal effort in case of last minute changes, for example.

Figure 2. PLUNIFY EDAxtend Cloud Platform

EDAxtend is Plunify's
cloud platform that can run in public and private clouds and uses the existing design tools, so that engineering teams can, without having to learn new


methodologies, harvest the power of large computing farms. API and script-based access
is available so that both interactive and automated build scenarios are supported.
Communication to and from the cloud platform is secured with VPNs, SSL/TLS and other
techniques.
For large development teams and multiple projects, PLUNIFY's platform and tools are a
major improvement from having to manage such in-house resources. For others, the TCO
might still not justify the use of such a platform.
Standard cloud services, where one can run a Virtual Private Server, can be used to
overcome the limitations of a small number of workstations. The following section
proposes such an approach.
Amazon has also launched its FPGA service in 2016 called F1 [Fig. 3] that uses field-
programmable gate arrays. The new instances are planned to become generally available
during 2017. The company motivates their service offering in the increasing affordability
of the FPGAs and the fact that they have become easier to program, opening the way to
their use into a wide area of services. The increase availability in the cloud is believed to
motivate the developers to start experimenting with them.
4K video processing and imaging, as well as machine learning are considered suitable
candidates for FPGA development.
NGCodec is a company that worked with Amazon in order to test the new F1 instances.
NGCodec implemented its product called RealityCodec for VR/AR processing using F1
instances within a month of development. NGCodec estimates that such an
implementation could allow the run of a complex video processing needed to run a virtual
reality device using a head-mounted display in the cloud. FPGAs have an important
advantage over GPUs because the encoding involves processing that GPUs normally transfer to the CPU. FPGAs are also more power efficient in this kind of scenario.

Figure 3. FPGA acceleration using an F1 instance


Amazon has a partnership with Xilinx, one of the major FPGA manufacturers. This is a
list of specifications for F1 instances:
• Xilinx UltraScale+ VU9P fabricated using a 16 nm process.
• 64 GiB of ECC-protected memory on a 288-bit wide bus (four DDR4 channels).
• Dedicated PCIe x16 interface to the CPU.
• 2.5 million logic elements.
• 6,800 Digital Signal Processing (DSP) engines.
• Virtual JTAG interface for debugging.
Despite these advantages, FPGA programming remains a hard discipline. Amazon has
announced that it won’t release tools for FPGA development (Xilinx will cover this
aspect) but instead it will focus on the cloud side where it will release development kits
and a machine image that the developers can use to get started with the F1 instances.

3. THE PROPOSED APPROACH

Infrastructure as a Service (IaaS) is a form of cloud computing that provides virtualized


computing resources over the Internet. IaaS is one of three main categories of cloud
computing services, alongside Software as a Service (SaaS) and Platform as a Service
(PaaS). Amazon Web Services, Microsoft, Google or Rackspace can be found amongst
the main companies that provide Infrastructure as a Service business plans. IaaS is
suitable for a number of situations where demand on the infrastructure is volatile or where
new companies do not possess the capital to invest in hardware. Both scaling and
temporary needs for hardware are covered by IaaS. Cloud providers supply the resources
in an on-demand manner from their pools of resources located in data centres. In our
proposed approach the cloud provider should also invest in a pool of FPGA chips linked
to an existing computing infrastructure [Fig. 4] in order to be able to provide a testing
infrastructure to their clients. Among the advantages of using a IaaS are the rapid
innovation due to the readiness of the infrastructure when needed and the focus on the
core business, in our case, the FPGA development. The payment model eliminates the
expenses involved in deploying on-site software and hardware. Despite this, users should
monitor their IaaS console in order to avoid being charged for unauthorized malicious
access.
Due to the fact that cloud providers own the IaaS infrastructure, the monitoring and the
management of the systems may become difficult for users. Also, if an IaaS provider
experiences downtime, users' workloads may be affected.
Cost-effectiveness may arise in specific scenarios. For example, once a certain software
is tested, it can be moved from the IaaS environment to a proprietary infrastructure in
order to free the resources for other development projects. Cloud computing uses
automation in order to meet the unpredictable requirements of the users. Cloud software
automates the provisioning and the scaling of the computing resources, storage and
network. In our approach, we envision the development of an interface that would
simplify the entire provisioning system through process templates used in the workflow,
in order to simplify the provisioning activities.


Figure 4. Hybrid IaaS-FPGA

In the following section, the “Consumer” is any company that will, at some point in a certain project, need to implement FPGA functionality. The “Provider” is a company that exposes cloud-based FPGA build solutions, with a different approach than those presented in chapter 2, “Related work”.
For exemplification purposes, the Consumer develops an FPGA-based PCIe communication card that implements various encryption algorithms for secured point-to-point communication. Apart from the technical design, the FPGA must implement the following components: a soft-core processor, Ethernet over Fibre Optics (SFP), a PCIe communication core, Direct Memory Access, reconfigurable encryption modules, timers and other auxiliary components. Implementation times were evaluated on a standard workstation with 8GB RAM and an Intel Core i7-3720 @ 2.6 GHz, with 4 cores and 8 threads:
Table 1. Time comparison
Step | Synthesis | Place and route | Bit stream generation
Minimum time | 20 min | 30 min | 2 min
Maximum time | ~230 min | ~300 min | ~3 min


The times were recorded using various configurations, versions of the IP cores and the presence/absence of custom IP cores that were implemented in the project. Project design and implementation were done on a Xilinx Kintex 7 FPGA chip, with the Vivado IDE as development environment. Xilinx Kintex 7 has the best price-performance ratio on the market, with 478k logic cells, a VCXO component, AXI IP and AMS integration. The FPGA chip also has 32 × 12.5G GTs, 2,845 GMACs, 34Mb BRAM and DDR3-1866. It can be purchased at half the price of similar 40nm devices and utilises half the power used by the previous generation.

Figure 5. Compile time comparison

Such an FPGA design can have a lot of trial-and-error steps due to the fact that there are many components, apart from the actual FPGA architecture, that need to be interconnected – kernel module, electronics, user space applications. If, on such a small/mid-sized system, every small modification of the FPGA can last up to 9 hours of implementation, on larger chips, such as Xilinx's UltraScale Kintex and Virtex, the required completion time can be a lot larger than 9 hours. When doing compatibility and regression testing, multiple configurations of the same architecture are a requirement. All tests were done on Linux. Different kernel versions did not impact the overall implementation time. According to [2], Linux workstations perform better than their Windows counterparts. The time reduction is exemplified in Fig. 5.
At this point, the management team is faced with multiple options: accepting the large implementation times, if they can meet the proposed time-to-market criteria, investing in hardware in order to decrease the overall testing and validation time, or using resources from a third party. If there are only a small number of projects which will benefit from hardware investments, chances are that such an approach would be a poorer option than the third one – renting from a dedicated provider. An evaluation of a common investment for a 10x speedup, based on the above tests:


Table 2. License cost

Component | Estimated cost
10 × Workstations | 10 × 1000 USD = 10k USD
10 × IDE Licenses | 10 × 1800 USD = 18k USD
Administration and human resource costs (6-month project span) | 6 × 800 USD = 4.8k USD

All costs were estimated at official “store” price list available online for an average
configuration. No particular vendor or technology was targeted. It is clear that for an
average 6-month FPGA step, the TCO can be very high. If the resource demand is higher,
some companies may take into account private clouds, but with more costs [6]. If a company has a strict timeline and a tight budget, it is obvious that in-house FPGA development is not an option.
A “Provider” would be any company that is willing to invest in hardware resources and
software licences in order to provide a pay-per-use service. The service model is
implemented in various Software-as-a-service and Infrastructure-as-a-Service setup [4].
The initial investment is larger than in the case of a single company, but the Provider
would pursue a larger time frame for ROI, as opposed to the TCO of a single company. It
is the provider's goal to approach companies that would like to develop FPGA
architectures in-house for their project but do not have a constant flow of such projects.
From a technical point of view, the Provider would use a cloud solution such as
CloudStack or OpenStack, fully automated, with resource management and dynamic
control as in [8]. For each client that starts a project, a number of Virtual Private Servers
would be started, with reserved resources according to the payment plan. If a 60 month
TCO is planned, an expected 50 clients / year and a maximum number of 10 simultaneous
clients, the investment plan for processing power would be:
• 400 cpu cores (4 cores / client × 10 clients × 10x speedup (parallel), as above)
This can be summed up as ~18 servers (dual processor 12 core = 6 clients, 64GB RAM)
that price at around 2300USD. Monthly datacentre costs and administration can be around
1500USD, with a total of 90k USD for 60 months. The estimated TCO for 60 months, not
accounting for unexpected situations, would be:
• 18 × 2300 + 1500 × 60 = 131k USD
The average market price per hour for 4 vCPUs with 16GB RAM is variable [5], but in the current year it is around $0.23. For a project with ~500 runs of 16 hours/run of compile times, this would be around $1800. It is clear that this pay-per-use model is much
more efficient for any small company than having to invest in its own infrastructure. As
for the Provider's TCO, if the targeted 50 clients per year are achieved, it can change from


ROI to profit as early as the second or third year of the project. However, there are cases in which one may not want to expose private code to a third party; in such a situation, the overall cost of the project should account for this constraint.
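The figures above can be verified with a short calculation; the numbers below are the ones assumed in the text (server and datacentre prices, the $0.23 hourly rate, ~500 runs of 16 hours each and the Table 2 in-house costs), and the snippet is only illustrative:

# Provider-side TCO over 60 months, using the assumptions from the text.
servers, server_price = 18, 2300        # USD per server, one-time investment
monthly_costs, months = 1500, 60        # USD per month, datacentre and administration
print(servers * server_price + monthly_costs * months)   # 131400 USD, i.e. ~131k USD

# Client-side pay-per-use cost for one project.
hourly_rate = 0.23                      # USD per hour for 4 vCPUs / 16 GB RAM
runs, hours_per_run = 500, 16
print(runs * hours_per_run * hourly_rate)                # 1840 USD, i.e. ~1800 USD

# In-house alternative for a 6-month project (Table 2).
print(10 * 1000 + 10 * 1800 + 6 * 800)                   # 32800 USD, i.e. ~32.8k USD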
There are other side-costs, such as custom application development, which must be accounted for. Nevertheless, these are one-time only and do not have such a high price as the infrastructure and running costs.
From a client's point of view, the whole process can be summarized as in [Fig. 6]:
6

Figure 6. Process overvview

Thhe client maanages its prooject throughh a web appplication. From here he can c control
asssigned resouurces and keeep costs undeer observationn. After a prooject is defined, he will
suubmit it for execution
e (2).. The cloud resource
r brokker would have already reeserved the
reequired servers (3) and sttarts up the execution.
e Thhere are two different staages in the
acctual flow: an
a interactive one (4) andd a batch/com mpile one (66). The interaactive step
foorwards a virttual desktop (5) to the cliient in whichh he can do FPGA F developpment in a
deesired, IDE, as chosen from f the prooject's settinggs. Such interactive apprroaches of
innterface forwaarding are alrready in use – example [7]. At this point, it is cleear that the
cllient does nott have any sofftware licensee cost. After hhe finishes thhe design, he will
w launch
thhe project for synthesis, place and routee and bit streaam generation inside the cloud.
c This
step is done in background (6). After thee completion of the processs, successful or not, the
cllient will be notified.
n He then
t will log back into its account, access the allocaated virtual
ennvironment anda continue as required. All communnication betw ween the clieent and the
prrovider are secured.
s The generated bit stream cann be seamlesssly integrateed into the
cllient's worksppace by meanns of a VPN or other clouud transport method,m such as the one
prroposed in [3]]. Several autthentication scchemes can bbe implementeed, based on the t client’s
neeeds.
From an economic point of view, the proposed workflow would require custom software components that need to be developed exclusively for this kind of project. The communication part (VPN, dedicated tunnels, a.s.o.) can be achieved with open source software, such as OpenVPN, or by using dedicated network hardware - such as Cisco's


ASA platform. The underlying virtual machine infrastructure, with the necessary tools to manage hardware and software resources, can also be implemented by the use of open
source projects as CloudStack or OpenStack. The web portal, however, would have to be
custom built for such a setup. A basic starting point for the development costs could be
summarized as:
Table 3. Development costs

Project step | Necessary team | Minimum team time
Requirement analysis | Project architect, Lead programmers, Team leaders | 3 months
Frontend development | Graphics designer, User experience designer, 2 Front-end developers, Lead programmer | 4 months
Backend development | Team leader, 3 to 5 software engineers, Lead programmer | 5 months
Database design | Database design architect, Team leader, Lead programmers | 2 months
Platform integration | Linux system administrator, Lead programmer | 2 months
Validation and testing | 3 to 5 Quality assurance operators | 4 months
Reporting | Lead programmer, reporting team (2 to 4) | 3 months

4. RUNNING TESTS AT THE REMOTE SITE

In case of parallel building of multiple hardware modules, testing and validating them at
the provider’s site could prove to be more efficient than retrieving bit-streams from each
generated module and testing them one at a time. This could be the case when the client
has only a few development FPGA boards but has many module versions. In such
situations, the provider might offer an automated testing and validation service [Fig.7]


Figure 7. Testing infrastructure for FPGA cloud

If the provider chooses to implement hardware validation at a customer's request, it is obvious that its infrastructure must be equipped with multiple FPGA chip types. Furthermore, additional configuration is needed for bridging PCIe cards to the virtual machines in which the customer's code is being developed. As proposed before, the most cost-effective deployment would be one based on a cloud computing platform, where every client receives a number of virtual machines. These virtual machines can be of two types: development and validation. The development ones are used to build the project, whereas the validation ones must be connected in some way with the FPGA chips that are targeted for testing. A standard method of connecting a hardware device directly to a virtual machine is by using Input/Output MMU virtualization (Intel's VT-d and AMD-Vi), sometimes referred to as pass-through. By using an IO memory management unit, virtual guests can directly access hardware resources that are present on the hypervisor. The motherboard and the BIOS firmware must also support this feature, apart from the CPU. There is a difference between PCI and PCIe devices in the sense that PCI resources can only be passed through all at once, while PCIe devices can be configured individually. This situation arises from the protocols' designs. There are, however, certain restrictions amongst different hypervisors with regard to this technology. The following table summarizes compatibility:
Table 4. Hypervisor-PCIe support

Hypervisor                 | Supported
Linux KVM                  | Yes
VirtualBox                 | Only on Linux
Hyper-V 2005, 2008, 2012   | No
VMware                     | Version/Product dependent


Given this restriction, the provider operating the infrastructure will have to pay attention to such features. Since KVM is the most used hypervisor on Linux, one choice would be the use of OpenStack, which can also integrate with VMware ESXi hosts. When using an x86-based processor, the hypervisor makes use of the native CPU virtualization instructions to achieve maximum performance. Intel VT features enable a faithful abstraction of the full capabilities of the Intel CPU to a virtual machine. All software in the VM can run without any performance or compatibility penalty, as if it were running natively on a dedicated CPU. Live migration from one Intel CPU generation to another, as well as nested virtualization, is possible [9].
On OpenStack, the compute service is responsible for interacting with the underlying hypervisor on a particular host. It controls the hypervisor through an API server. Linux KVM is the default hypervisor for Compute. The PCI pass-through feature in OpenStack allows full access and direct control of a physical PCI device in guests. This mechanism is generic for any kind of PCI device. Thus, an FPGA card which is installed on a PCI/PCIe bus would be visible to the virtualized guest as if it were directly connected. Before the existence of this technology, any device exposed to the guest machine would have been emulated. The data exchange between the emulated device and the physical one would have been mediated by the hypervisor, thus leading to lower performance and often to the lack of full capabilities of the exposed device inside the virtual machine. One of the problems introduced by device pass-through appears when live migration is required. Live migration is the suspension and subsequent migration of a VM to a new physical host, at which point the VM is restarted. This limitation could be resolved by the use of PCI hot plugging, a technology which permits the insertion and removal of PCI devices at runtime. Even if guest and hypervisor support is present, the PCI card must also be capable of supporting such a feature. In the case of many FPGA-based PCI devices, this is not to be expected, since the actual physical removal and insertion while a system is powered on is unlikely to happen.
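As a rough illustration of the host-side checks involved, the following sketch (Python, assuming a Linux hypervisor with sysfs mounted; the Xilinx PCI vendor ID 0x10ee is used only as an example filter) lists the PCI devices of a given vendor together with their IOMMU groups, which is the information an operator needs before deciding whether a card can be passed through to a guest.

import os

SYSFS_PCI = "/sys/bus/pci/devices"
FPGA_VENDOR = "0x10ee"  # example filter: Xilinx PCI vendor ID (assumption; adjust per card)

def read_attr(path):
    """Return the stripped contents of a sysfs attribute, or None if missing."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

def list_passthrough_candidates(vendor=FPGA_VENDOR):
    """Yield (pci_address, device_id, iommu_group) for devices of a given vendor."""
    for addr in sorted(os.listdir(SYSFS_PCI)):
        dev_path = os.path.join(SYSFS_PCI, addr)
        if read_attr(os.path.join(dev_path, "vendor")) != vendor:
            continue
        device_id = read_attr(os.path.join(dev_path, "device"))
        group_link = os.path.join(dev_path, "iommu_group")
        # If the kernel exposes no IOMMU group, VT-d/AMD-Vi is off and the
        # card cannot be passed through to a guest.
        group = os.path.basename(os.readlink(group_link)) if os.path.islink(group_link) else None
        yield addr, device_id, group

if __name__ == "__main__":
    for addr, dev, group in list_passthrough_candidates():
        status = f"IOMMU group {group}" if group else "no IOMMU group (pass-through unavailable)"
        print(f"{addr}  device {dev}  {status}")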
Some PCI devices provide Single Root I/O Virtualization and Sharing (SR-IOV) capabilities. When SR-IOV is used, a physical device is virtualized and appears as multiple PCI devices. Virtual PCI devices are assigned to the same or different guests. In the case of PCI pass-through, the full physical device is assigned to only one guest and cannot be shared [9]. If the device under test contains at least one PCIe core that provides virtual functions, the underlying test infrastructure must be able to assign a corresponding virtual function to multiple virtual test machines, in order to fully test the FPGA configuration.
If the pass-through method is used on a hypervisor that runs multiple test systems, the system must be configured in such a way that it never forwards the same FPGA card to multiple virtual machines, since doing so may lead to system instability and potential data loss.
Another solution for a testing farm could be based on a private cluster architecture. This is fundamentally different from a cloud approach due to the fact that it is non-interactive. A typical workflow is presented in Fig. 8:


Figure 8. Private cluster architecture

Users submit jobs through a dedicated cluster interface which resides on the Computing Node. After the cluster manager receives the jobs, it evaluates their requirements and queues them up for submission. When a node is free, the cluster manager submits a job to that node. When the job starts executing on the assigned node, it will first preconfigure the environment and fetch, if necessary, any input data. After the execution is done, the result is usually stored in a common location or on a Storage Element. At this point, the user that submitted the job can view its result. This setup, however, implies additional complexity which might not be visible in the beginning. The main challenge is to have a framework, integrated with the cluster's batch system, that can manage the FPGA card's resources, communicate with the host, exchange data with the software processes that interact with the FPGA and provide an easy-to-use Application Programming Interface (API) for high-level software programmers. At the same time, any hardware programmer must be able to design a specific acceleration module without the need to interact with other components such as AXI buses or DMA cores. In order to be generic, the architecture must be vendor-independent and must be able to accommodate any acceleration module that implements the required interfaces and signals, without any particular hardware requirements. Since a large computing cluster is usually heterogeneous in terms of host CPU architecture, data buses, operating systems and the libraries/functions referenced by user jobs, the framework must be able to allow runtime environment rebuilding and must automatically manage the underlying changes before a job is launched.
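A minimal sketch of the worker-side part of such a flow is given below (Python; the curl fetch, the hypothetical "fpga-load" programming utility and the storage-element path are placeholders, since the actual tooling is vendor- and site-specific). It only illustrates the fetch / program / test / publish sequence described above.

import shutil
import subprocess
import tempfile
from pathlib import Path

def run_fpga_test_job(bitstream_url, test_cmd, storage_element="/mnt/storage/results"):
    """Generic worker-node flow: fetch input, configure the FPGA, run the test,
    publish the result to a common storage location. All external commands are
    placeholders for site-specific tooling."""
    workdir = Path(tempfile.mkdtemp(prefix="fpga_job_"))
    bitstream = workdir / "design.bit"

    # 1. Fetch the input data (here via curl; a real cluster might use a storage client).
    subprocess.run(["curl", "-fsSL", "-o", str(bitstream), bitstream_url], check=True)

    # 2. Program the attached FPGA. "fpga-load" is a hypothetical vendor utility.
    subprocess.run(["fpga-load", "--bitstream", str(bitstream)], check=True)

    # 3. Run the user's test command (given as a list of arguments) against the card.
    log = workdir / "test.log"
    with open(log, "w") as out:
        result = subprocess.run(test_cmd, stdout=out, stderr=subprocess.STDOUT)

    # 4. Publish the result to the shared Storage Element for the submitting user.
    dest = Path(storage_element) / workdir.name
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(log, dest / "test.log")
    return result.returncode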
A cluster farm that is used exclusively for testing would be more cost efficient than a cloud infrastructure. In the testing scenario, the host on which an FPGA configuration is validated must have at least one FPGA connected. The difference lies in the computing power and hardware resources required. While the cloud architecture would need more memory and CPU in order to support virtual machines, a worker node inside the testing cluster would need only moderate resources, since its only job is to test and communicate with the FPGA and not to run virtual machines.


There are also software considerations that need to be taken into account when deploying a cluster-based FPGA testing farm. Neither the cloud approach nor the cluster one provides out-of-the-box solutions for the scenarios discussed in the present paper. In both cases, FPGAs need to be programmed with the firmware under test. Usually, a complete device programming involves a system reboot; this is especially true for PCIe setups, since the motherboard's PCIe root complex may need to acknowledge and configure the newly instantiated cores.

5. CONCLUSIONS

FPGA-based solutions are becoming more and more visible as they are being
incorporated into various electronic devices, in most cases as an extension to System on a
Chip architectures. Companies that don't have FPGA development as their primary
market, or small and start-up companies without a lot of investment funds can benefit
from cloud services in order to decrease their FPGA development time and
implementation costs. The alternative would be investing in in-house resources and managing a private grid or computing cluster, an approach that can have a big financial impact on the whole project. Apart from the technical knowledge, the resources can be provided by an external party, such as the hypothetical company presented in the previous section. Thus, a company can evaluate exactly the monthly costs of such a service for a limited period of time, which can lead to an overall lower development cost.
The current design flow can prove time consuming, and the required resources might not fit into a project's financial flow if the FPGA component performs an auxiliary, but mandatory, function with regard to the overall project.
A cloud based approach, with interactive application forwarding and a solid back-end for
batch building, can be a viable alternative for such situations. Cloud has changed the
industry both in terms of financial returns and in the visible support that it offers to small
businesses. By reducing the total cost of ownership, small companies can now access the
power and versatility of FPGA chips to develop new and innovative solutions. We can
envision a near future where all these technologies can help concepts such as smart cities
to become reality. The smart city concept promotes the use of Information Technology to
enhance the performance and quality of the services offered to citizens. FPGA
technologies can enable applications such as urban traffic management where real-time
response is crucial. Interconnection between different systems can be done in a secure
manner by using dedicated FPGAs for secure communication. In addition, companies such as Xilinx have been releasing tools that simplify the use of more common languages, such as C and C++, to program FPGAs. This is an important factor in popularizing FPGA development among start-ups.
A service offer represents a quantified set of services and a range of applications that end users can use through the provider. Service offerings should include resource guarantees, metering, resource management and billing cycles. Service management functionality should be developed in such a way that the defined services can be quickly and easily implemented and managed by the end user. For a cloud service to be truly on demand and at the same time able to meet service level agreements, it must be able to handle an increase in workload at any time. Management solutions must possess the ability to create policies around workload and data management to ensure the efficiency and performance delivered by the system running in the cloud.
The paper has outlined existing solutions, with their advantages and disadvantages, and has proposed a new workflow that uses current cloud technology. Such a public service would be backed by a dedicated company focused on providing cloud FPGA compile services. The backend would be a cloud stack such as OpenStack or CloudStack, with a web frontend through which a client can manage his projects and resources. All communication would be encrypted and data would be stored on the provider's disks only for the duration of the project. Confidentiality and data integrity would be assured through the usual means, such as Service Level Agreements and Non-Disclosure Agreements.

6. REFERENCES

[1] LabVIEW FPGA Compile Cloud Service, http://www.ni.com/white-paper/52328/en/
[2] LabVIEW FPGA Compile Worker Compile Time Benchmarks,
http://www.ni.com/white-paper/14040/en/
[3] Laurenţiu A. Dumitru, Sergiu Eftimie, Dan Fostea, An FPGA-Based cloud storage
gateway, 2nd International Conference SEA-CONF, Academia Navală Mircea Cel
Bătrân, Constanţa, 2016
[4] Gorelik, Eugene. Cloud computing models. Diss. Massachusetts Institute of
Technology, 2013.
[5] Yi, Sangho, Artur Andrzejak, and Derrick Kondo. "Monetary cost-aware
checkpointing and migration on Amazon cloud spot instances." IEEE Transactions
on Services Computing 5.4 (2012): 512-524.
[6] Greenberg, Albert, et al. "The cost of a cloud: research problems in data center
networks." ACM SIGCOMM computer communication review 39.1 (2008): 68-73.
[7] Banik, Thomas, et al. "System for virtual process interfacing via a remote desktop
protocol (rdp)." U.S. Patent Application No. 10/527,913.
[8] Dumitru Laurenţiu A., Sergiu Eftimie, et al. "A novel architecture for
authenticating scalable resources in hybrid cloud." Communications (COMM),
2016 International Conference on. IEEE, 2016.
[9] OpenStack documentation https://docs.openstack.org
[10] Intel Virtualization Technology, http://www.intel.com/content/www/us/en/
virtualization/virtualization-technology/intel-virtualization-technology.html


PREDICTIVE ANALYTICS FOR TRANSPORTATION INDUSTRY

Stefan Iovan 1*

ABSTRACT

The movement of goods from one point to another is complex: the transportation industry is a blend of the networks, infrastructure, equipment, information technology and employees necessary to transport a large variety of products safely and efficiently throughout the nation and around the world. Although generally considered separate transportation entities, trains, planes, ships and trucks are actually part of an integrated network. One of the defining characteristics of today's transportation industry is intermodal or logistic services, the movement of freight through a coordinated and nearly seamless system that uses multiple modes of transportation. Many products now move worldwide in standardized containers that easily transfer onto truck chassis, rail cars and ship decks as they move from origin to destination. The paper also presents the role and importance of product and service logistics in obtaining competitive advantage. After a short presentation of the evolution of logistics, we will define the logistics concept and, respectively, integrated logistics. The analysis of the logistics activities is based on the total cost concept and has as its purpose the efficient and effective management of the physical flows of raw materials, materials and finished products, and of the informational flows. The competitive advantage is ensured through the harmonization of the logistics function with the other company functions and through the integration of the logistics chain of all upstream and downstream organizations, in order to ensure a high level of consumer service at economical costs, under the form of supply chain management. In the end we present the main tendencies in the evolution of logistics in Romanian firms under the circumstances of their increasing international dimension.

KEYWORDS: transportation industry, intermodal, information technology, logistics, supply chain, supply chain management, predictive analytics

1. INTRODUCTION

Technology plays an important role in the transportation industry, allowing companies to respond to evolving requirements to move an ever-increasing number of products.
Technological enhancements are transforming the operation and management of the
transportation system in revolutionary ways [1].

1
* corresponding author, Associate Professor PhD, Computer Science Department, West University
Timisoara, Romania, stefan.iovan@infofer.ro


Companies are trying to optimize their transportation systems to better forecast demand and analyze all the competing resources, such as workforce, routes, cost, transport mode, equipment and demand, all while gaining more efficiency from their operations.
With the rapid growth and demand in the transportation industry, companies are
struggling to efficiently transport materials and utilize their employees while reducing the
risk of turnover.

1.1. Customer Pain Points

Optimizing routes [2] and workforce utilization pose significant challenges:
• Globalization and the changing economy are placing greater demand on the transportation system to move shipments faster and cheaper.
  - The most critical aspect of the transportation infrastructure problem is congestion or capacity constraint. Motor vehicle overcrowding is a major problem in the largest cities. The shortest route often isn't the fastest.
  - Companies are collecting more data than ever from GPS, bar-coding, RFID, time stamping, etc. While more data can give companies more information, it also makes decision making more complicated.
  - Accurately forecasting demand challenges companies: not really knowing what is coming, when, and who needs it when, can tie up resources.
• There is tremendous cost in recruiting and training employees. Keeping full crews for the trucks, trains, ships, and planes challenges even the best organizations in the industry.
  - Companies can't pay their employees enough to live on the road; they need a compensation package that enables employees to balance their personal and professional needs.
The need to optimize routes and workforce utilization is becoming even more crucial with
fluctuating fuel costs. Even with the ability to pass some of the fuel costs along to
customers via fuel clause adjustment, companies are experiencing shrinking profit
margins and capacity constraints in terms of both physical and human capital.

1.2. Conversation Starters

The challenges associated with forecasting and optimizing delivery routes and workforce
utilization can significantly impact a company's bottom line [3]. We've been able to
identify and solve a variety of issues by:
• Accurately forecasting demand and planning to optimize resource allocation;
• Conducting impact analysis of changes in the plan;
• Developing an operating plan that is tightly matched to traffic patterns;
• Enabling shortest-path-based algorithms using a large number of applicable
factors;
• Providing resource retention scenarios and plans.


With these data mining, forecasting and operations research practices in place, transportation companies are able to optimize their daily operations, thereby reducing costs and increasing revenue growth potential.
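As an illustration of the shortest-path idea mentioned above, the sketch below (Python) runs a plain Dijkstra search over a small, invented road network whose edge weights blend distance with congestion and fuel-price factors; the weighting scheme and the sample network are assumptions used only for demonstration, not part of any specific product.

import heapq

def edge_cost(distance_km, congestion, fuel_price):
    """Combine several applicable factors into a single edge weight (illustrative weighting)."""
    return distance_km * (1.0 + congestion) * fuel_price

def shortest_route(graph, start, goal):
    """Standard Dijkstra over a dict graph: node -> list of (neighbor, cost)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical depot/customer network; distances and congestion levels are invented.
fuel = 1.4
network = {
    "Depot": [("A", edge_cost(12, 0.3, fuel)), ("B", edge_cost(9, 0.8, fuel))],
    "A":     [("Customer", edge_cost(7, 0.1, fuel))],
    "B":     [("Customer", edge_cost(6, 0.9, fuel))],
}
print(shortest_route(network, "Depot", "Customer"))  # the shorter road via B loses to A once congestion is priced in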
Table 1. Questions to determine the issues of the transport company

Optimizing routes | Workforce utilization
How do you determine the best method of transport? | Is your transportation demand affected by seasonality? What are your busiest months and how do you plan for workforce shifts?
How do you currently determine the best route for a shipment/delivery? | How much does it cost to recruit and train new employees?
What are your experiences in meeting customer Service Level Agreements regarding on-time deliveries? | How long does it take to train an employee?
At what frequency, if at all, are you incurring significant refunds to customers because of late deliveries? | What is your turnover rate by employee classification or role?
What processes can you attribute to any lost business due to poor delivery times? | How do you retain your best engineers, drivers, pilots, or captains?
What factors do you consider when determining the most efficient route? | Do you know the peaks and valleys of when an employee may consider leaving and what to do to keep them?
Do you know which customers, business segments or routes are most profitable? | Do you know the peaks and valleys of when an employee may consider leaving and what to do to keep them?
How do you allocate your resources? Do you use a forecast or demand planning application? | Do you know what your applicable factors are? Who are your best employees and what makes them the best?
Is your network constrained? Is demand exceeding capacity? |
How do you adjust for seasonal and climate variations by location? |
How do you get information to and from the drivers and integrate the data into systems to identify trends for the next time this route, season, event, etc. occurs? |
With volatile fuel prices, at what point do you start passing on the increased cost to your customers? |

1.3. Solutions

The key to transport operations and workforce utilization is understanding all of the variables, such as weather, season, mode of transportation, employee experience, workforce utilization, traffic patterns, levels of congestion or network constraint, size of load,


fuel prices and more [3]. With all the variables in play when developing your shipping
plans, you will uncover the most beneficial options.
With analytics, decision makers no longer have to rely on intuition. These analytics capabilities give you the power to make decisions and build productive protocols based on:
• facts using your data and industry models;
• thorough forecasting;
• simultaneous consideration of all options;
• simulation and “what if” analysis;
• careful predictions of outcomes and estimates of risk;
• state-of-the-art decision tools and algorithmic engines.
Analytics offer significant value to transportation companies by providing capabilities to
better allocate resources, adapt to changes in cost, demand, traffic patterns, weather,
employee turnover and satisfaction and economic conditions.
Companies using analytics software will benefit from improved performance, a maximized workforce, optimized routes and boosted retention rates, resulting in improved profit margins and top-line revenue growth.

2. e-LOGISTICS – MULTIMODAL TRANSPORT MANAGEMENT

2.1. Definition and development of the logistics concept

The United Nations Convention relating to the multimodal transport of goods, adopted in Geneva in 1980 (the Multimodal Transport Convention), defines multimodal transport as the carriage of goods by at least two different modes of transport, on the basis of a multimodal transport contract, from a place in one country where the goods are taken over by a multimodal transport contractor and transported to a designated place of delivery located in another country.
The rising cost of energy and raw materials in the 1970s imposed concerns about ensuring the efficient procurement of raw materials for production; to this end, procurement programs were developed in line with market objectives. In this context one can speak of supply logistics and production logistics within materials management. Organizing by functions (supply, production, marketing, finance, sales, etc.) resulted in the "spreading" of logistics activities across different functions whose objectives were often contradictory, which generated excessive costs or even losses.
Integrated logistics is based on the analysis of the total cost of logistics activities, with a focus on the level of consumer service. This means that, for a given level of consumer service, the aim is to minimize total logistics cost rather than the cost of the individual activities that compose logistics. Attempts to reduce the cost of individual activities may result in an increased total cost. The concept of total cost of logistics activities must include: the level of customer service, transportation costs, storage costs, the cost of process control and computerization, selling costs, and quantity-related manufacturing costs.
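A toy numerical illustration of this total-cost argument follows (Python; all figures are invented): lowering one component in isolation, here transport through shipment consolidation, can raise storage cost enough to increase the total.

def total_logistics_cost(customer_service, transport, storage, process_control, selling, manufacturing):
    """Total cost as the sum of the components listed above (all values in the same currency)."""
    return customer_service + transport + storage + process_control + selling + manufacturing

# Baseline: frequent small shipments (invented figures).
baseline = total_logistics_cost(10_000, 40_000, 15_000, 5_000, 8_000, 22_000)

# "Optimized" transport only: fewer, larger shipments cut transport cost,
# but larger inventories push storage cost up by more than the saving.
transport_only = total_logistics_cost(10_000, 30_000, 28_000, 5_000, 8_000, 22_000)

print(baseline, transport_only)  # 100000 103000 -> the total cost went up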


Logistics comprises [4] planning, implementing and controlling the physical flow of materials and finished goods from the point of origin to their point of use, in order to make a profit and to satisfy customer requirements. The goal is to create logistic supply chains, namely physical flows from materials to finished products for final consumers at the lowest cost, knowing that their share in the total cost of the product is around 30-40% for processed products.
One of the most prestigious groups of specialists in logistics in the U.S., The Council of Logistics Management, uses the established term "logistics management", which it defines as "the process of planning, implementing and controlling the efficient, bidirectional flow and storage of goods, services and related information between the point of origin and the point of consumption in order to meet consumer requirements" [5]. It is a general definition that manages to highlight the management of physical distribution and delivery services, having consumer needs as its main objective and the profit motive as the guarantee of competitiveness.
This definition of logistics covers all three activities, planning, implementing and controlling, and not just one or two. It thus rejects the view that limits the involvement of logistics to implementing rather than planning policies, a view that ignores the strategic function of logistics.
Today, logistics has moved from the so-called traditional approach, focused on flows towards the point of consumption, to an approach that also covers inverse flows and storage (reverse logistics), that is, activities that originate at the point of consumption. Reverse logistics must receive more attention now, given the increasing profitability of online purchases.
The purpose of logistics, as shown in the definition, is "to meet consumer demands", which means that logistics strategies and activities must be based on the desires and needs of consumers rather than on the requirements and capabilities of the other parties involved in the process. This involves designing and managing an effective and efficient communication system, so that businesses can communicate effectively with their customers and know their needs and wishes.
All these aspects are very important, but the cost component should not be neglected. In many businesses, the cost of logistics activities reaches or exceeds 20% of producers' total cost, and even 50-55% of the cost of raw materials, which can be turned into an important source of cost competitiveness.
The strategic dimension of logistics is underlined in [6], where it is defined as "the process of strategically managing the acquisition, movement and storage of materials, semi-finished and finished products, starting from suppliers, across the enterprise and its distribution channels, with the objective of maximizing profit and the prompt resolution of customer orders".
If we chain the logistics activities of enterprises producing goods and services, we can highlight three important segments that interact with each other, i.e. supply logistics (input logistics), production logistics and materials management, and distribution logistics. If we take into account the relationship between logistics and marketing, given that marketing aims, among other things, to maximize sales on various market segments, we find that logistics is nothing more than "marketing oriented".
In this context we can say that logistics aims to achieve a level of service to consumers in terms of the five "rights": the right product, at the right place, at the right time, in the right quantity and at the right cost. The term "appropriate cost" is specific to the firm's logistics system. P.F. Drucker [7], more than four decades ago, argued that improvements in marketing and logistics are an important way to obtain products at economical cost.

2.2. Supply Chain Management - SCM

A general definition of an enterprise's supply chain includes all suppliers, production capacities, distribution centers, warehouses and customers, together with the stocks of raw materials, semi-finished goods and finished goods, and all resources and information involved in satisfying customers. Synonymous terms are logistics network and supply network.
Another, more specific, definition states that the supply chain is a business process that connects suppliers, manufacturers, warehouses, logistics providers, distributors and end customers and takes the form of an integrated collection of skills and resources aimed at delivering services and products to customers.
In its classical sense, the term supply chain management covers the coordination and management of all activities involved in the supply chain in order to achieve optimal performance. Currently, some analysts call these activities supply chain operations, in an effort to better reflect the high degree of collaboration between the actors involved in this process. In the context of the analyzed company, the supply chain starts with its suppliers and ends with the enterprise's customers. Frequently, the supply chain is described through the costs and revenues involved in each component:
- costs with suppliers/raw materials;
- transport costs;
- costs of production;
- storage and distribution costs;
- revenue from customers.
In the context of e-business, the importance of the demand chain, covering the order processing processes, has been reconsidered. Current economic conditions impose short-term goals on firms, such as:
- reducing inventory;
- revenue growth while maintaining constant fixed costs;
- improved performance.
SCM applications include forecasting capabilities, synchronizing supply with demand (requirements); matching supply and demand ensures that the ordered product arrives at the right time. Following the concept of the authors of [8], who in turn cite the views of other authors, at least the following possible functions of SCM applications can be delineated:


- planning - the strategic area of supply chain management, in which the resource management strategy for a particular business is defined;
- supplier management - developing a set of processes for prospecting suppliers, supplier selection, the purchase and payment of goods, and monitoring the relations with suppliers;
- manufacturing/production - scheduling, launching and executing the production of goods, if the company carries out production activities;
- delivery and logistics - coordinating the receipt of orders from customers, the operation of a network of warehouses for cargo management, and so on;
- returns management - managing products returned by customers or to suppliers and the customer relationships arising from various complaints.

3. ANALYTICS CAPABILITIES

Streamline the mining and analysis of vast amounts of data. Analytics streamlines the
process to create highly accurate descriptive and predictive models based on analysis of
vast amounts of data from across an enterprise.
Look beyond where your company's been to where your company can go. Accurately
analyze past operational and financial performance over time to forecast the future. You
can identify previously unseen trends and anticipate fluctuations so you can more
effectively plan for the future. Factors that impact your business, such as 3rd party
econometric data, national and global market conditions, weather, traffic patterns and time
of year, can be identified, quantified and included in your forecasting processes for
improved results [9].
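A minimal sketch of how such external factors can be folded into a forecast is shown below (Python with NumPy; the monthly volumes, fuel prices and seasonality flags are invented): an ordinary least-squares fit of past shipments on a trend plus exogenous regressors, followed by a one-period-ahead prediction.

import numpy as np

# Invented history: monthly shipment volumes with two external factors.
volumes     = np.array([120, 135, 150, 160, 158, 170, 185, 190], dtype=float)
fuel_price  = np.array([1.2, 1.2, 1.3, 1.3, 1.4, 1.4, 1.5, 1.5])
peak_season = np.array([0,   0,   0,   1,   1,   1,   1,   0  ], dtype=float)

# Design matrix: intercept, linear trend, fuel price, seasonality flag.
t = np.arange(len(volumes), dtype=float)
X = np.column_stack([np.ones_like(t), t, fuel_price, peak_season])

# Ordinary least-squares fit.
coef, *_ = np.linalg.lstsq(X, volumes, rcond=None)

# Forecast the next month under assumed values of the external factors.
next_month = np.array([1.0, len(volumes), 1.6, 0.0])
print("forecast:", next_month @ coef)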
Create models with unlimited variables that optimize any scenario down to the details.
Analytics offers a wide array of mathematical optimization, project/resource management
and scheduling, simulation, decision analysis and other operations research capabilities to
enable you to build detailed models of your business or organization and create an
accurate picture of current, future and potential performance.

4. BUSINESS INTELLIGENCE

Financial management aims to provide the organization permanently with the necessary resources and to exercise control over the effectiveness of the cash transactions involved.
The first destination of resources is the purchase of fixed assets. Fixed assets include, in the case of a multimodal transport contractor, buildings, land and equipment. The purchase of such goods is deemed to be capital expenditure.
Second, resources are used for current assets, i.e. those that are expected to be converted into cash or consumed within a period of 12 months or within a normal operating cycle. The current assets of a multimodal transport contractor may include, for example, fuel.
The third destination of resources is operating expenses, which include rent, taxes, payments to subcontractors, insurance and payroll. Ensuring adequate resources for operating expenses must be one of the major concerns of the financial management of a multimodal transport contractor.
The last destination of resources is the establishment of reserves (provisions), materialized as readily usable, highly liquid cash resources that ensure the company's ability to cope with special events (the so-called "dark days syndrome"); reserves are also part of current assets. The funds distributed shall finance the business for the agreed credit period.
In order to monitor and control financial performance, the literature recommends calculating two indicators. The first of these is the return on capital employed, calculated as a percentage ratio between net profit and total assets. The second is the commercial rate of profit, calculated as a percentage ratio between net profit and sales.
We propose calculating a third indicator, called return on assets, computed as the ratio between sales and total assets and expressed in lei, showing how much in sales is generated by one leu invested in assets. The second aspect that a multimodal transport company needs to consider for sound financial management is solvency. The indicators that we would recommend are indebtedness (leverage) and interest coverage.
Indebtedness is a relationship between liabilities and equity, indicating the share of capital and credit; for a multimodal transport company it is calculated by dividing long-term loans by long-term loans aggregated with shareholders' funds. Interest coverage is calculated as the ratio between net profit and the interest on the loan.
Finally, the last but not least important aspect on which a multimodal transport company should focus is liquidity. The most important indicator, which provides information on the firm's available funds in relation to its outstanding current commitments, is the current liquidity index, calculated as the ratio between current assets and liabilities due in the short term (less than 12 months).
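The indicators described above translate directly into a few ratios; a small sketch follows (Python, with invented input figures).

def financial_indicators(net_profit, total_assets, sales, long_term_loans,
                         shareholders_funds, interest, current_assets, current_liabilities):
    """Indicators discussed above; percentage ratios are returned as percentages."""
    return {
        "return_on_capital_employed_%": 100.0 * net_profit / total_assets,
        "commercial_rate_of_profit_%": 100.0 * net_profit / sales,
        "return_on_assets": sales / total_assets,  # sales generated per leu invested in assets
        "indebtedness": long_term_loans / (long_term_loans + shareholders_funds),
        "interest_coverage": net_profit / interest,
        "current_liquidity_index": current_assets / current_liabilities,
    }

# Invented figures for a multimodal transport contractor.
print(financial_indicators(net_profit=450_000, total_assets=5_000_000, sales=3_200_000,
                           long_term_loans=1_200_000, shareholders_funds=2_400_000,
                           interest=90_000, current_assets=900_000, current_liabilities=600_000))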
Management structures have come to expect a powerful tool for measuring, monitoring and tracking key business processes. With tightening competition, managers now need to solve complex problems, often insufficiently clearly defined, with implications on multiple levels. Whichever solution is chosen, the leading business intelligence functions include:
- Planning and controlling cargo pick-up and delivery on time, under efficient conditions in terms of working time, distance traveled and resources used;
- Automating logistics processes;
- Workflow management in real time;
- Optimal use of cargo space;
- Monitoring key performance indicators (KPIs) and generating reports tailored to the organization;
- Interfacing with ERP, TMS, WMS, GPS (including SAP®);
- Automatic route planning for all types of transport: retail distribution, LTL transportation, container transport, intermodal transport, postal and courier services, tank transport, vehicle transport, naval, air or mixed.


An SCM package provides several modules for the different functions in the supply chain; companies that purchase such a package select and implement those that fit their business. Among these functions are:
- collaboration in the supply chain;
- collaborative design;
- collaborative achievements;
- demand planning and supply;
- production planning;
- event management in the supply chain;
- supply chain performance management, etc.
As for Romania, we must note that Romanian companies still operate in the classic, traditional manner regarding SCM solutions, while recognizing their importance as an effective business tool. Some of them have implemented SCM solutions, but these are limited in number and functionality. In response to market needs, the solutions offered are in most cases modules integrated into an ERP application, but also have connections to applications like SCM, CRM and BI.

5. TRANSPORTATION INDUSTRY BACKGROUNDER

The U.S. transportation industry is healthy, providing the nation with the most extensive,
highest quality transportation system in the world. For the near term, the forecast is for a
continued trend of expanding capability and improving service in each of the major
transportation modes. Relative to other regions of the world, the United States retains the
geographical and overall transportation advantage.

5.1. Trucking Industry

Trucks have carried the lion's share of the country's freight for the past 40 years. The
trucking industry's market dominance will remain unchallenged in the near future, as no
other form of transportation reaches so many areas with such a proven record of reliability
and flexibility. Analysts expect the trucking industry to expand an average of 2 percent
per year, with certain sectors (e.g., intermodal container operations and small package
delivery) accounting for most of this growth. In fact, small package freight has grown to
become the third largest product sector trucks carry, and demand is increasing at more
than 5 percent a year.
In spite of the positive market indicators, the industry faces several significant challenges
in maintaining its competitive advantage. The long-haul portion of the trucking industry is
facing a serious shortage of qualified drivers. Presently, there is a shortage of 80,000
drivers, leaving 5 - 10 percent of the fleet idle.
Moreover, driver turnover is approaching 150 percent in some companies, costing the
industry $3 billion annually for the recruitment and retention of new drivers. The U.S.
economy is partly to blame, as truckers are able to find better paying and less stressful
jobs in other fields. Over the long run, the industry will have to improve compensation


and working conditions to correct this problem. Information technology will play an
incentive role here as well, allowing drivers more control over their schedules [10].
The long-haul trucking industry is also undergoing intense competition as deregulation
has opened the door for many new players. To stay competitive in this environment, many
small trucking companies operate on a modest 5.8 percent operating margin. The jump in fuel prices financially squeezes many truckers who are unable to raise surcharges enough to cover the added expense. This situation will continue as long as competition remains fierce.

5.2. Railroad Industry

The railroad industry is simply holding its own. Despite the decline in some railroad
statistics, the industry remains highly capable of moving the country's freight. In fact, the
steady increase in labor productivity is a testament to the impact of information
technology on operations and the ease of trans-shipping containers by rail.
Railroads recognize the significant opportunity that intermodal containers are providing
the transportation industry. Train companies are adjusting routes and purchasing more
double-stack container cars specifically to target this important market niche. Currently,
intermodal traffic makes up 28 percent of the total rail car loadings, and container
handling has increased nearly 300 percent since 1980.
The railroads will remain a key segment of the transportation industry in the future. Rail
will continue to be the most efficient transporter of bulk commodities and general freight
that must move over long distances.
It is the mode of choice for outsized and oversized shipments and will continue to play an
important and growing role in the intermodal freight business. Furthermore, the railroads
are making dramatic improvements in efficiency through increased investments in
infrastructure, re-tracking mainlines, and state-of-the-art locomotive designs.

5.3. Air Cargo Industry

In addition to moving express packages, air freight carriers transport high-value, time-
sensitive manufactured goods that need to move long distances.
The air transportation industry is especially capital-, labor-, and technology-intensive.
Airline and air cargo companies are very conscious of market trends and are constantly
striving to make their operations more efficient and to earn a greater share of the market.
Productivity in the industry is up an amazing 110 percent since 1980 because of wise
investment in aircraft, use of a "hub-and-spoke" network, terminal enhancements, and
information technology.

5.4. Maritime Industry

While the U.S. trucking and air carrier industries lead the world in market share and
capability, the U.S. merchant marine fleet and ports continue to lag behind their
counterparts in Europe and Asia. The shipping industry has fewer and fewer U.S.-


registered and -operated ships because it costs less to register ships elsewhere. The story
is much the same for the U.S. port infrastructure and operation. Terminal throughput and
efficiency is greater in Europe and Asia. Nevertheless, U.S. ports and inland waterways
handle more than 2 billion tons of cargo a year, and waterborne traffic represents 95
percent of the U.S. overseas trade.
Analysts expect world deep-sea trade to grow at 3-4 percent a year, doubling in the next
20 years. Ports in the United States can expect a rising tide of business as international
trade increases with Europe, Asia, and South America. To meet growing demand, the
industry must prepare now to handle the next generation of container ships. The newest
container ships will be wider and require a deeper draft.
Presently, only 5 of the top 15 U.S. ports have adequate channel depths, and of these, only
the West Coast ports have adequate berth depths. Dredging these ports will not be easy
because of environmental concerns. The ports will also need to expand their terminal
infrastructure to include larger cranes and greater container-handling and storage capacity.
Moreover, the state and local governments that control the ports will have to approve and
fund projects to improve rail and highway connections to handle the increased volume of
containers that will flow in and out of the ports.

6. PREDICTIVE ANALYTICS FOR BUSINESS TRANSFORMATION

In the Big Data era, companies in a variety of industries, including transportation, feel more acutely the need to collect the information most relevant to their businesses [9, 11]. They want to find a way to make decisions based on accurate information at the right time. Achieving this requires the development of systems that can transform the collected data into information from which actions that directly benefit the business can be generated.
Some of these benefits may be:
• Identifying growth opportunities – internal and external data analysis can help shape and forecast business results, allowing the identification of the most profitable growth opportunities, as well as of differentiators for the business.
• Improving business performance – data analysis facilitates agile planning, more accurate forecasting and budgeting; improved planning is an important tool for decision making.
• Better management of risk and regulatory requirements – data analysis allows improved reporting procedures and the identification of risk areas such as compliance violations, fraud or reputation damage.
• Using emerging technologies – new technologies can reveal new opportunities for obtaining information relevant to business management.
Very few companies use the full potential of predictive analysis. On the other hand, this approach often comes into conflict with the attempt to keep IT costs under control and lower them. Therefore, identifying and capitalizing on the available information, and identifying the information sources that can support the generation of new opportunities, have become the main challenge.


In the digital age in which we operate, the volume of data generated is growing. Every minute of every day, more than 200 million e-mails are sent globally, and Google receives more than 2 million search requests. It is estimated that by 2020 around 450 billion online transactions will take place every day. Given this context, organizations consider data a fourth factor of production, besides capital and human and material resources.
Effective integration of predictive analysis in business management has a measurable impact on performance because it allows better planning and clearer, more informed decisions, resulting in increased profits, reduced risk and increased business agility.
When using predictive analytics, it is useful for transport companies to ensure that all relevant functions are involved in the process, so as to obtain an overview and to minimize information loss. Information about consumers is a typical example in this respect: sales has billing address data and transaction records, marketing has information obtained from the analysis of consumer feedback, and the logistics department has details on actual deliveries. All this information can sometimes be duplicated or vary from one department to another [11, 12].
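A toy sketch of consolidating such departmental records by a shared customer identifier is given below (Python; the field names and records are invented); it also flags attributes that differ between departments before the analysis starts.

from collections import defaultdict

# Invented departmental extracts, keyed by a shared customer id.
sales     = {"C001": {"billing_address": "Str. Exemplu 1", "last_order": "2017-03-02"}}
marketing = {"C001": {"feedback_score": 4.2}}
logistics = {"C001": {"delivery_address": "Str. Exemplu 1", "late_deliveries": 1}}

def consolidate(*sources):
    """Merge per-department records into one view per customer, reporting conflicting values."""
    merged = defaultdict(dict)
    for source in sources:
        for customer_id, record in source.items():
            for field, value in record.items():
                if field in merged[customer_id] and merged[customer_id][field] != value:
                    print(f"conflict for {customer_id}.{field}: "
                          f"{merged[customer_id][field]!r} vs {value!r}")
                merged[customer_id][field] = value
    return dict(merged)

print(consolidate(sales, marketing, logistics))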
A coherent analysis of all these data can be a challenge, but an accurate analysis can enhance the business and generate added value. Companies that monitor and estimate how consumer behavior and preferences evolve, without exceeding the limits of confidentiality, can gain significant advantages.

7. LOGISTIC IN ROMANIAN INDUSTRIAL ENTERPRISES

In the last fifteen years, the managers of Romanian industry have faced multiple problems caused by economic instability, inflation, shortening product life cycles, the environment, market conditions and the diversification of demand. All this makes it difficult to find the most effective and efficient way to organize logistics in companies, in general and in particular.
Currently, there are industrial companies, primarily those with private capital, which have a well-timed and efficient logistics organization; however, we can speak of an effective organizational structure for logistics in only a few enterprises.
A concern for integrating logistics activities under a single authority has been present since 1990. Thus, we can say that the companies which designed and developed logistics as an organizational structure were oriented primarily towards transport and storage activities, which the logistics managers involved control in more than 70% of cases, followed by order processing and delivery, inventory control and supply.
In large and medium-sized enterprises, this logistics function is headed by a manager in the position of Vice President or Executive Director. The introduction and development of logistics in the value chain of Romanian enterprises took place under the pressure of two forces, namely:
• The transition to a market economy, a process that subjects Romanian firms to increasing competitive pressure and to lower costs and that eliminates competitors' access to public resources, leading firms to reduce capital requirements by resorting to logistics;
• The integration of Romanian firms into the global economy, especially the taking over of the production of components and operations for final products whose realization depends on participants located in several countries, which requires logistics discipline and rigor in contract enforcement [13].
For better international coverage, the Romanian logistics system should allow businesses, primarily SMEs, to exploit the opportunities offered by the development of international trade in goods and services, by exploiting the competitive advantage offered by logistics.
Currently, there are a number of challenges facing the transport system and its development, which have led to increased competitiveness and to the development of energy-efficient technologies. Thus, in a more accessible transport system, traffic management should be prioritized.
Regarding freight and the efficiency of the whole transport system, in addition to encouraging projects aimed at freight schemes and at developing new solutions for the delivery of goods, smart technologies and Intelligent Transport Systems (ITS) play an essential role in achieving the objectives of transport policies for developing an efficient, effective and sustainable system.
The role of ITS stems from the problems caused by traffic congestion and from the development of new information technologies for simulation, real-time control and communication networks, which provide the opportunity to address issues related to urban traffic management in an innovative manner. Congestion reduces the efficiency of the transportation infrastructure and has a negative impact on travel time, pollution and energy consumption.
In urban traffic management and logistics, the key factors underpinning the successful implementation of ITS are the involvement of stakeholders, the development of partnerships, the application of essential tasks, optimizing network performance, maximizing automation and minimizing human intervention at the operational level.

8. CONCLUSIONS

Predictive analytics helps companies move from a retrospective and intuitive decision-making process to a proactive one, oriented towards and based on information. Based on this approach, companies can build models with which to make better forecasts on realistic scenarios and to anticipate the opportunities and challenges associated with them [12].
In a digital world in constant evolution, with growing volumes of data being generated, only those companies that build on this information will be able to increase their competitiveness. Business performance will depend on the ability of the organization to have access to accurate information and to exploit it. Those organizations that can understand and filter the relevant information, that know how to discover patterns and act on the results thus obtained, will become top-performing businesses.


9. REFERENCES

[1] Iovan, St., Increasing the Individual Performance through Learning and
Innovation, Iasi: Editura PIM, Proceedings of the International Conference:
“Innovative methodologies and technologies in work based learning within the
VET sector”, Romania, ISBN: 978-606-13-2026-4, pag. 168-180, (2014);
[2] Iovan, St., Daian, Gh. I., Enterprise Services Architecture in the World of Information
Technology, Tirgu-Jiu: “Academica Brancusi” Publisher, Annals of the “Constantin
Brancusi” University of Targu Jiu, Fiability & Durability, Supplement No. 1/2012,
(SYMECH 2012), ISSN: 1844 – 640X, pag. 375-381, Romania, (2012);
[3] Iovan, Şt. O analiza de proces a managementului traficului feroviar romanesc,
Bucureşti: Editura AGIR, Buletinul AGIR, Nr. 1/2013, pag. 63 - 67,
http://www.buletinulagir.agir.ro/articol.php?id=1653 , Romania, (2013);
[4] Kotler, Ph., Marketing Management, Bucharest: Teora Publisher, (1998);
[5] Lambert, D.M., Stock, J.R., Strategic Logistics Management, Boston: Irwin
Homewood Publisher, (1993);
[6] Christopher, M., Logistics and Supply Chain Management, London: Pitman
Publishing, (1993);
[7] Drucker, P.F., The Economy's Dark Continent, New York: Fortune 65, no. 4, (1962);
[8] Fotache, D., Hurbean, L., Integrated software solutions for business management,
Bucharest: Publishing House, (2004);
[9] Ivanus, Cr., Iovan, Şt. Governmental Cloud – Part of Cloud Computing, Bucureşti:
Revista Informatica Economică, Vol. 18, No. 4/2014, pag. 91 – 100,
http://www.revistaie.ase.ro/content/72/08%20-%20Ivanus,%20Iovan.pdf; (2014)
[10] Iovan, St., Litra, M., Developments in Freight and Passenger Railway, Targu Jiu:
“Academica Brancusi” Publisher, Annals of the “Constantin Brancusi” University,
Engineering Series, Issue 4/2013, (CONFERENG 2013), ISSN: 1842 – 4856, pag.
149 - 164, Romania, (2013);
[11] Iovan, St., Ivanus, Cr. Business Intelligence and the Transition to Business
Analytics, Targu Jiu: “Academica Brancusi” Publisher, Annals of the “Constantin
Brancusi” University, Engineering Series, Issue 4/2014, (CONFERENG 2014),
ISSN: 1842 – 4856, pag. 150-156, Romania, (2014);
[12] Iovan, St., Identify Public Services and Software Oriented Architecture Services
Taxonomy, Tirgu-Jiu: “Academica Brancusi” Publisher, Annals of the “Constantin
Brancusi” University of Targu Jiu, Fiability & Durability, Supplement No. 1/2015,
(SYMECH 2015), ISSN: 1844 – 640X, pag. 46 - 52, Romania, (2015);
[13] Stolojan, Th., Competitiveness of Romania, Bucharest: Logistics Management
Magazine, no. 2, (2003);


MRC – THE THEORY OF LAYER-BASED DOCUMENT IMAGE COMPRESSION

Costin-Anton Boiangiu 1*
Luiza Grigoraş 2

ABSTRACT

The concept of Mixed Raster Content describes a compound document image as being
composed of several layers, each containing a part of its visual information. Usually, three
layers are sufficient for classifying the types of content present in such an image: a foreground
layer, a background layer, and a mask layer. In this context, MRC-based compression
schemes promise to be more efficient than classical ones (where a single algorithm is used to
compress the entire image), due to their implicit content-adaptive nature, because each layer
can be compressed separately with a suitable algorithm (JPEG, JBIG etc.).

KEYWORDS: MRC, Document Compression, Image Compression, Data Compression, Image Processing, OCR, Resampling Filters.

INTRODUCTION

Within the image processing field of research, image compression is of high interest nowadays, as performance issues (transmission of images over the Internet or by fax) and storage issues (online libraries, online image databases) have become more prominent with the increased rates of information exchange across electronic media.
There is a variety of compression algorithms and image formats, each being designed for
a particular purpose and image type in mind (De Queiroz et al., 1999). For example, JPEG
and JPEG2000 are designed for natural image compression (Rabbani and Joshi, 2002),
while JBIG2 favors images with recurring symbols (Haneda and Bouman, 2011), such as
document images.
A compression algorithm good for all types of images does not exist (De Queiroz et al.,
1999; De Queiroz, 2005), but the variety of compression algorithms can be exploited by
using a generalized framework, one which could adapt the algorithm to the characteristics
of the image, at least to some degree. De Queiroz et al. (1999) stipulates that this could be
accomplished by a compression algorithm based on the concept of Mixed Raster Content.
This concept describes an image (comprising information in various forms: text, pictures,
line art) as a composite of layers, each with different semantics and different visual and
signal characteristics, correspondingly.

1
* corresponding author, Professor PhD Eng., Politehnica University of Bucharest, 060042 Bucharest,
Romania, costin.boiangiu@cs.pub.ro
2
Engineer, Politehnica University of Bucharest, 060042 Bucharest, Romania, luiza.grigoras@cti.pub.ro


MRC COMPRESSION BASICS

Figure 1. Simple MRC decomposition scheme: (a) Abstraction. After the decomposition process,
many gaps are left in the foreground and background layers (marked with "X"). The bottom images
show the data-filling of the foreground layer with a constant color (in this case, white). (b)
Exemplification on a concrete image (layers obtained with proposed codec).


An MRC image can be decomposed into as many layers as one considers it necessary, but
usually three layers are considered to be sufficient in categorizing image information and
compressing it accordingly (ITU-T Recommendation T.44, 2005), as illustrated in Fig. 1:
the foreground layer (text color), the mask layer (identifying text and contours) and the
background layer (background colors and embedded images). This decomposition of an
image is most advantageous, allowing the separate compression of layers with the most
suited algorithm for each one and at different compression ratios (Pavlidis, 2017). For
example, the background layer may contain continuous-tone images and/or areas of
constant color, which can be well compressed using JPEG or JPEG2000. The foreground
layer usually contains only areas of constant color, and thus can be heavily compressed
(Haneda and Bouman, 2011) using the same algorithms. However, the mask layer is a bi-
level image, containing many characters, i.e., recurring symbols, for which JBIG or
JBIG2 obtain better performances, as mentioned above. Therefore, MRC is implicitly
adaptive and versatile (Mukherjee et al., 2002) and can unify several compression
algorithms. Furthermore, it can be reduced to each one of them, when a single layer is
used for the image (ITU-T Recommendation T.44, 2005).
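As an illustration of this layered representation, the following minimal Python sketch (an illustration only, not the codec proposed in this paper; array names and shapes are assumptions) shows how a decoded three-layer MRC image is recomposed: wherever the binary mask is set, the output pixel is taken from the foreground layer, otherwise from the background layer.

import numpy as np

def recompose_mrc(foreground, background, mask):
    # foreground, background: H x W x 3 uint8 arrays; mask: H x W boolean array.
    # Where the mask is True the pixel comes from the foreground (text color),
    # otherwise from the background layer.
    return np.where(mask[..., None], foreground, background)

# Example: white "text" pixels over a flat gray background.
h, w = 4, 8
fg = np.full((h, w, 3), 255, dtype=np.uint8)   # foreground layer (text color)
bg = np.full((h, w, 3), 128, dtype=np.uint8)   # background layer
mk = np.zeros((h, w), dtype=bool)
mk[1:3, 2:6] = True                            # pixels classified as text
page = recompose_mrc(fg, bg, mk)               # reconstructed 24 bpp image

Read in reverse, the same per-pixel selection rule is exactly what the decomposition stage has to decide when it builds the mask from the original image.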
Because of this decomposition, a pixel in the three-layered image has now 49
corresponding bits in MRC representation (if the image is represented in a 24 bits per
pixel format) (De Queiroz et al., 1999; Mukherjee et al., 2002), but this increased size of
the image is compensated by the final compression ratio, as a result of compressing each
layer with a different algorithm. Thus, this method can compete with a single compression
algorithm applied on the image as a whole and can even outperform it. The performance
gain of a three-layered based MRC approach is obvious in the case of simple text
documents, as outlined by De Queiroz (2005), while for continuous-tone images this
approach is less suitable.
Several MRC-based codecs have been proposed: De Queiroz (2005) analyzed a JPEG-
MMR-JPEG MRC encoder and proposed one based on JPEG2000; Zaghetto and De
Queiroz (2007) proposed another coding scheme, based on the H.264 video compression
standard, that works very well for still images too, and compared an MRC JPEG2000-
based solution to it. In both solutions, JBIG2 was used for mask compression. In
Mukherjee et al. (2002) an MRC-compliant codec using JPEG2000 and JBIG
compression is analyzed; the emphasis is on the segmentation steps, which are specially
designed to suit and encourage JPEG2000 compression.
There are also proprietary solutions based on image layer decomposition. The most
famous example is DjVu, which uses a JBIG-like algorithm to encode the mask (Bottou et
al., 1998) and a decomposition algorithm that is similar to that implied by MRC, except
that the foreground layer encompasses the mask layer, preserving the text (letter
shapes) together with its colors (De Queiroz, 2005).
Following (De Queiroz, 2005; Mukherjee et al., 2002; Zaghetto and De Queiroz, 2008), a
general MRC compression scheme can be derived that would include several steps,
among which the most important are: image decomposition into layers, data-filling of
foreground and background layers and actual compression of each layer using a suitable
compression algorithm. The decomposition step is considered to be the most important
one (Haneda and Bouman, 2011). The pixels of the image are separated into two layers,

background and foreground, based on their corresponding color levels. The result of this
stage is the mask layer, which acts like a sieve, making it possible to distinguish between
the pixels belonging to each layer. A clear separation between the foreground and the
background layer should be accomplished, in order for the subsequently applied
compression algorithms to perform well (Haneda and Bouman, 2011). The data filling
step is also important, because it prepares the layers for the actual compression, therefore
greatly influencing compression performance. The empty regions left in the
foreground and the background layers are filled in with a certain color (or colors), in order
to make the layers compress well under a specific compression algorithm. To
improve compression further, the background and the foreground layers should not
contain abrupt color transitions; the data-filling process must ensure the smoothness of the
resulting image (De Queiroz, 2005; Mukherjee et al., 2002).
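A minimal sketch of the data-filling idea is given below (an assumption made for illustration, not the data-filling algorithm of any of the cited papers): empty pixels of a layer are filled iteratively with the average of their already-known neighbors, so that colors propagate outward from the known regions and no abrupt transitions are introduced at the layer boundaries.

import numpy as np

def fill_layer(layer, known, iterations=64):
    # layer: H x W x 3 float array; known: H x W boolean mask of non-empty pixels.
    # Each iteration fills the empty pixels that touch at least one known pixel
    # with the average of their known 4-neighbors (np.roll wraps at the borders,
    # which is acceptable for a sketch).
    out = layer.astype(np.float64).copy()
    out[~known] = 0.0
    filled = known.copy()
    for _ in range(iterations):
        if filled.all():
            break
        acc = np.zeros_like(out)
        cnt = np.zeros(out.shape[:2], dtype=np.float64)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(out, (dy, dx), axis=(0, 1))
            shifted_known = np.roll(filled, (dy, dx), axis=(0, 1))
            acc += shifted * shifted_known[..., None]
            cnt += shifted_known
        new = ~filled & (cnt > 0)
        out[new] = acc[new] / cnt[new][..., None]
        filled |= new
    return out

A smoothing pass (for example with a Gaussian filter, as mentioned for Mukherjee et al., 2002) can then be applied over the filled regions to further reduce high-frequency content before compression.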
Many other refinements can be added to this scheme, in order to reduce layer size and
therefore improve compression ratio (layer downsampling, region/stripe decomposition
etc.). Due to the fact that they usually contain very smooth color transitions and few
details, the foreground and the background layers can also be subsampled in order to
reduce the size of the layer to be compressed (De Queiroz et al., 1999), without the risk of
losing important information in the process. Better compression rates can be obtained by
also preparing the layers for the type of compression they will undergo. For
example, for JPEG compression, an approach based on the decomposition of the image
into blocks is advantageous (De Queiroz, 2005). These blocks are individually segmented,
filled in and smoothed. For JPEG2000 compression, which is not block-based,
"smoothness guarantees compactness" (Mukherjee et al., 2002). That is why the data-
filling stage has to be treated carefully, as it plays an important part in obtaining a good
compression ratio.

DATA-FILLING USING INTERPOLATION AND RESAMPLING

De Queiroz (2000) emphasized that the problem for a data-filling algorithm is to
produce smooth transitions between non-empty pixels and empty pixels filled in with a
custom color. Although this can be accomplished by various means, a simple method
based on segmented filtering and an iterative method based on DCT were presented,
whereas other papers (Lakhani and Subedi, 2006; De Queiroz, 2006) discuss methods
based on wavelets. Mukherjee et al. (2002) emphasized the importance of the data-filling
step in preparing layers for JPEG2000 compression and presented a data-filling algorithm
based on interpolating image pixels; a high degree of smoothing is obtained
using a Gaussian filter.
On these grounds, we can relate to the field of image interpolation and resampling, in
order to perform the data-filling of a layer. The interpolation and resampling steps can be
united in a single one and be ideally performed by using a low-pass filter, which
eliminates high-frequency components that may cause aliasing in the resulting image. The
sinc function is the ideal low-pass filter (Smith, 1997): its frequency response has a
perfectly rectangular shape, with a transition band of width 0 (thus the steepest roll-off)
and no ripples, neither in the passband nor in the stopband. The function has the form:

sinc(x) = sin(πx) / (πx) for x ≠ 0, and sinc(x) = 1 for x = 0.    (1)

Although any function can be used as a base function for interpolation (Thévenaz et al.,
2000), all efforts were concentrated towards the sinc function, either to find good and
easy-to-compute polynomial approximations for it or to find a practical form for using it.
This function has infinite support, thus it cannot be used in practice as it is. Its domain has
to be restricted. Directly truncating the function results in discontinuities at the boundaries
of its restricted domain which causes severe artifacts in the filtered image; these include
ringing, blurring, and aliasing (Mitchell and Netravali, 1988; Hauser et al. 2000). Another
form of restricting it is by multiplying the function with a finite-support window function
(apodization function) as stated by Thévenaz et al., (2000). The usage of a window over
the sinc function guarantees smooth transitions to zero and thus eliminates discontinuities,
reducing the sinc filter to a practical width (Smith, 1997).
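The sketch below illustrates the windowed-sinc idea in Python, using the popular Lanczos window (the sinc windowed by a stretched copy of itself, see Table 2) to interpolate and resample a one-dimensional signal; it is an illustration of the general technique, not the exact resampler evaluated later in this paper.

import math

def sinc(x):
    # Normalized sinc, with the removable singularity handled explicitly.
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, w=3):
    # Sinc windowed by a stretched sinc, restricted to |x| <= w (Lanczos3 for w = 3).
    return sinc(x) * sinc(x / w) if abs(x) <= w else 0.0

def resample(signal, new_length, w=3):
    # Interpolate a 1-D sequence at new_length evenly spaced positions using the
    # Lanczos kernel (for strong downscaling the kernel support would normally be
    # widened by the scale factor; this is omitted here for brevity).
    n = len(signal)
    scale = n / new_length
    out = []
    for j in range(new_length):
        center = (j + 0.5) * scale - 0.5          # position in source coordinates
        first = math.floor(center) - w + 1
        acc = norm = 0.0
        for i in range(first, first + 2 * w):
            weight = lanczos(center - i, w)
            acc += weight * signal[min(max(i, 0), n - 1)]   # clamp at the borders
            norm += weight
        out.append(acc / norm if norm else 0.0)
    return out

print(resample(list(range(8)), 12))   # upsample an 8-sample ramp to 12 samples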
In order to make a thorough comparison between the filters applied in MRC compression,
we selected several from three main families: polynomial, exponential and windowed-sinc.

POLYNOMIAL AND EXPONENTIAL FILTERS


Filters in this generic family have the great advantage of being easy to compute. They also perform
well (Thévenaz et al., 2000), trying to approximate the ideal sinc function and reducing ringing and
aliasing, but at the cost of blurring. Higher order polynomials usually give better results (Lehmann
et al., 1999). Table 1 lists the polynomial and exponential filters used in this study, with their corresponding
mathematical forms.

Table 1. Polynomial and exponential resampling filters.

Box:
  f(x) = 1 for |x| ≤ 0.5; 0 for |x| > 0.5

Triangle:
  f(x) = 1 − |x| for |x| ≤ 1; 0 for |x| > 1

Hermite:
  f(x) = 2|x|³ − 3x² + 1 for |x| ≤ 1; 0 for |x| > 1

Spline: Cubic_H4_1 (a = −1), Cubic_H4_2 (a = −1/2), Cubic_H4_3 (a = −3/4), Cubic_H4_4 (a = −2/3):
  f(x) = (a + 2)|x|³ − (a + 3)x² + 1 for |x| ≤ 1;
         a|x|³ − 5ax² + 8a|x| − 4a for 1 < |x| ≤ 2;
         0 for |x| > 2

3rd order B-Spline (Quadratic):
  f(x) = −x² + 0.75 for |x| ≤ 0.5;
         0.5(|x| − 1.5)² for 0.5 < |x| ≤ 1.5;
         0 for |x| > 1.5

4th order B-Spline (Cubic B-Spline):
  f(x) = |x|³/2 − x² + 2/3 for |x| ≤ 1;
         (1/6)(2 − |x|)³ for 1 < |x| ≤ 2;
         0 for |x| > 2

BC-family:
  f(x) = (1/6)[(12 − 9B − 6C)|x|³ + (−18 + 12B + 6C)x² + 6 − 2B] for |x| ≤ 1;
         (1/6)[(−B − 6C)|x|³ + (6B + 30C)x² + (−12B − 48C)|x| + 8B + 24C] for 1 < |x| ≤ 2;
         0 for |x| > 2
  with parameters (B, C): Mitchell (1/3, 1/3), Notch (1.5, −0.25), Catmull-Rom (0, 0.5),
  Robidoux (0.3782, 0.3109), Robidoux Sharp (0.2620, 0.3690)

Gaussian:
  f(x) = (1/(√(2π)·σ))·e^(−x²/(2σ²)) for |x| ≤ W; 0 for |x| > W

The box filter is the simplest interpolating filter and has a sinc-shaped frequency
response, which makes it a poor low-pass filter (Parker et al., 1983). Results obtained
with the box filter may be satisfactory in some cases, especially when downsampling
(Thyssen, 2017). The triangle filter, also known as the bilinear interpolation filter, gives a
smooth, natural gradient pass between pixels when upsampling, in contrast to the box
filter (Thyssen, 2017). The Hermite interpolation function is actually the basic cubic two-
point interpolation function, the simplest case of a cubic polynomial with boundary
conditions settled (C0 and C1 continuity) as stated by Lehmann et al. (1999). Splines are
"piecewise polynomials with pieces that are smoothly connected together" (Unser, 1999).
They are easier to compute than sinc-based filter functions and are appropriate for multi-
resolution approaches which imply the construction of image pyramids. Values for the constant a
that have been established based on optimality principles include -1, -3/4, -2/3 and -1/2
(Lehmann et al., 1999; Parker et al., 1983). B-Splines are often used in practice (Thévenaz
et al., 2000). They are obtained by repeatedly convolving a base function (the rectangle function)
with itself, yielding higher-order B-Splines. B-Spline approximating filters
perform the most blurring of an image (Thyssen, 2017).
BC-family filters have been deduced and discussed by Mitchell and Netravali (1988).
Based on experimental results, they were able to identify filters for which the best
compromise may be obtained, for some or even all types of image artifacts. The
"Mitchell" (B = C = 1/3, ideal filter) and "Notch" filters have been suggested by Mitchell
and Netravali (1988), Catmull-Rom has been discussed in the same paper and included in
this family, while the last two filters have been recommended in Thyssen (2017).
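For reference, the BC-family kernel of Table 1 can be transcribed directly into code; plugging in the (B, C) pairs listed above yields the Mitchell, Notch, Catmull-Rom and Robidoux variants. The Python sketch below is a plain transcription of the piecewise formula, given for illustration only.

def bc_kernel(x, b, c):
    # Direct transcription of the BC-family piecewise polynomial from Table 1.
    x = abs(x)
    if x <= 1.0:
        return ((12 - 9 * b - 6 * c) * x ** 3
                + (-18 + 12 * b + 6 * c) * x ** 2
                + (6 - 2 * b)) / 6.0
    if x <= 2.0:
        return ((-b - 6 * c) * x ** 3
                + (6 * b + 30 * c) * x ** 2
                + (-12 * b - 48 * c) * x
                + (8 * b + 24 * c)) / 6.0
    return 0.0

mitchell    = lambda x: bc_kernel(x, 1 / 3, 1 / 3)   # B = C = 1/3
catmull_rom = lambda x: bc_kernel(x, 0.0, 0.5)       # B = 0, C = 0.5
notch       = lambda x: bc_kernel(x, 1.5, -0.25)     # B = 1.5, C = -0.25
print(mitchell(0.0), catmull_rom(0.0), notch(0.0))   # kernel values at the center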
From the exponential filters, we have selected the Gaussian filter, which has been used as
a blurring filter in image processing for a long time now, being useful in Gaussian noise
removal. The Gaussian filter function slowly descends toward zero, but generally, it can
be considered zero for |x| > W, where W is the chosen width.

WINDOWED-SINC FILTERS

In his article, Harris (1978) performs a thorough review and analysis of many window
functions, accompanied by various suggestive plots in time and frequency domains. The
following versions of window functions (Table 2) reflect their exact code implementation
and follow mostly Harris's (1978) analysis, thus being defined on [-N/2, N/2], where N is
the filter width. We also use the notation W = N/2.


Table 2. Window functions.

Lanczos (Lanczos3: W = 3; Lanczos4: W = 4):
  f(x) = sinc(x/W) for |x| ≤ W; 0 for |x| > W

Cosine (Cosine: α = 1; Cosine3: α = 3):
  f(x) = cos^α(πx/(2W)) for |x| ≤ W; 0 for |x| > W

Generalized Cosine:
  f(x) = Σ (i = 0..n) aᵢ·cos(iπx/W) for |x| ≤ W; 0 for |x| > W
  with coefficients (a1, a2, a3, a4, a5):
  Hann: 0.5, 0.5
  Hamming: 0.54, 0.46
  Blackman: 0.426590, 0.496560, 0.076849
  Nuttall: 0.355768, 0.487396, 0.144232, 0.012604
  Blackman-Nuttall: 0.3635819, 0.4891775, 0.1365995, 0.0106411
  Blackman-Harris: 0.35875, 0.48829, 0.14128, 0.01168
  Kaiser-Bessel: 0.40243, 0.49804, 0.09831, 0.00122
  Flat Top: 0.21557894, 0.41663158, 0.27726315, 0.08357894, 0.00694736

Welch:
  f(x) = 1 − (x/W)² for |x| ≤ W; 0 for |x| > W

Parzen:
  f(x) = 6(|x|/W)³ − 6(x/W)² + 1 for |x| ≤ W/2;
         2(1 − |x|/W)³ for W/2 < |x| ≤ W;
         0 for |x| > W

Bohman:
  f(x) = (1 − |x|/W)·cos(π|x|/W) + (1/π)·sin(π|x|/W) for |x| ≤ W; 0 for |x| > W

Gaussian (GaussianW):
  f(x) = e^(−x²/(2σ²)) for |x| ≤ W; 0 for |x| > W

The Lanczos window is built on a simple idea: a truncated version of the sinc function is
used to window the sinc function. Lanczos3 is quite popular (Thyssen, 2017) and gives
good qualitative results (Turkowski, 1990). The cosine window is the first member (α = 1)
in the family of functions presented in Table 2. More windows can be obtained by varying
α; the greater the α power, the smoother the window and the better the results, but with an
increased width of the main lobe (Harris, 1978). Cosine, Hann, Hamming, Blackman,
Welch, Parzen, Bohman and Gaussian (having a slightly different form than that of the
Gaussian filter) window functions are taken from Harris's (1978) paper on windows. All
the rest of the generalized cosine window coefficients are taken from WOLFRAM (2017).
The Blackman-Harris window as referred to here is actually the 4-term Blackman-Harris
window, and Kaiser-Bessel is actually the sampled version of the Kaiser-Bessel window.
These two windows have been recommended by Harris in the same paper.
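A short Python sketch of the generalized cosine window of Table 2 is given below; the coefficient lists follow the table above (the first coefficient multiplies the constant term), and a windowed-sinc filter tap is then obtained by multiplying sinc(x) with the window value. This is an illustration of the family, not the exact implementation used in the proposed codec.

import math

COEFFICIENTS = {
    # Coefficients as listed in Table 2 (other windows of the family are
    # obtained simply by swapping in their coefficient lists).
    "Hann":     [0.5, 0.5],
    "Hamming":  [0.54, 0.46],
    "Blackman": [0.426590, 0.496560, 0.076849],
}

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def cosine_window(x, width, coeffs):
    # Generalized cosine window on [-width, width]; zero outside.
    if abs(x) > width:
        return 0.0
    return sum(a * math.cos(i * math.pi * x / width) for i, a in enumerate(coeffs))

def windowed_sinc(x, width, coeffs):
    # One tap of the corresponding windowed-sinc filter.
    return sinc(x) * cosine_window(x, width, coeffs)

print(cosine_window(0.0, 3.0, COEFFICIENTS["Hann"]),   # 1.0 at the window center
      cosine_window(3.0, 3.0, COEFFICIENTS["Hann"]))   # 0.0 at the window edge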
The rest of this paper comprises three sections: the proposed compression scheme, with
emphasis on the data-filling stage; the results of the codec evaluation on several document
images, analyzing the effects and performances of the selected resampling filters applied
in the data-filling stage; recommendations with regard to the filters and parameters which
are best to be used in this compression scheme.

REFERENCES

[1] Bottou, L., Haffner, P., Howard, P.G., Simard, P., Bengio, Y., LeCun, Y. (1998).
High-Quality Document Image Compression with DjVu. J. Electron. Imaging, 7, pp.
410-425.
[2] De Queiroz, R.L., Buckeley, R., Xu, M. (1999). Mixed Raster Content (MRC)
Model for Compound Image Compression. In Proceedings of SPIE Visual
Communications and Image Processing, volume (3653), pp. 1106-1117.
[3] De Queiroz, R.L. (2000). On Data Filling Algorithms for MRC Layers. In
Proceedings of the IEEE International Conference on Image Processing, volume
(2), pp. 586-589, Vancouver, Canada.
[4] De Queiroz, R.L. (2005). Compressing Compound Documents. In Barni, M. (ed.),
The Document and Image Compression Handbook. Marcel-Dekker.

[5] De Queiroz, R.L. (2006). Pre-Processing for MRC Layers of Scanned Images. In
Proceedings of the 13th IEEE International Conference on Image Processing, pp.
3093-3096, Atlanta, GA, U.S.A.
[6] Glasner, D., Bagon, S., Irani, M. (2009). Super-Resolution from a Single Image. In
Proceedings of the 12th IEEE International Conference on Computer Vision, pp.
349-356, Kyoto, Japan.
[7] Paşca L. (2013), Hybrid Compression Using Mixed Raster Content, Smart
Resampling Filters and Super Resolution, Diploma Thesis, unpublished work
(original author name Paşca L., actual name Grigoraş L.).
[8] Haneda, E., Bouman, C.A. (2011). Text Segmentation for MRC Document
Compression. IEEE Trans. Image Process, pp. 1611-1626.
[9] Harris, F.J. (1978). On the Use of Windows for Harmonic Analysis with the Discrete
Fourier Transform. Proc. IEEE, 66, pp. 55-83.
[10] Hauser, H., Groller, E., Theussl, T. (2000). Mastering Windows: Improving
Reconstruction. In Proceedings of the IEEE Symposium on Volume Visualization,
pp. 101-108. Salt Lake City, UT, U.S.A.
[11] ITU-T Recommendation T.44 Mixed Raster Content (MRC), (2005).
[12] Lakhani, G., Subedi, R. (2006). Optimal Filling of FG/BG Layers of Compound
Document Images. In Proceedings of the 13th IEEE International Conference on
Image Processing, pp. 2273-2276, Atlanta, GA, U.S.A.
[13] Lehmann, T.M., Gönner, C., Spitzer, K. (1999). Survey: Interpolation Methods in
Medical Image Processing. IEEE Trans. Med. Imaging, 18, pp. 1049-1075.
[14] Minaee, S., Abdolrashidi, A., Wang, Y. (2015). Screen content image segmentation
using sparse-smooth decomposition. 49th Asilomar Conference on Signals,
Systems, and Computers, Pacific Grove, CA, pp. 1202-1206. DOI: 10.1109/
ACSSC.2015.7421331.
[15] Minaee, S., Wang, Y. (2015). Screen content image segmentation using least
absolute deviation fitting. 2015 IEEE International Conference on Image
Processing (ICIP), Quebec City, QC, pp. 3295-3299. DOI: 10.1109/
ICIP.2015.7351413.
[16] Mitchell, D.P., Netravali, A.N. (1988). Reconstruction Filters in Computer
Graphics. ACM SIGGRAPH Comput. Graph, 22, pp. 221-228.
[17] Mtimet, J., Amiri, H. (2013). A layer-based segmentation method for compound
images. 10th International Multi-Conferences on Systems, Signals & Devices 2013
(SSD13), Hammamet, pp. 1-5. DOI: 10.1109/ SSD.2013.6564005.
[18] Mukherjee, D., Chrysafis, C., Said, A. (2002). JPEG2000-Matched MRC
Compression of Compound Documents. Proceedings of the IEEE International
Conference on Image Processing, volume (3), pp. 73-76.

[19] Parker, J.A., Kenyon, R.V., Troxel, D.E. (1983). Comparison of Interpolating
Methods for Image Resampling. IEEE Trans. Med. Imaging, 2, pp. 31-39.
[20] Pavlidis, G. (2017). Mixed Raster Content, Segmentation, Compression,
Transmission. Signals and Communication Technology Series, Springer Singapore,
DOI: 10.1007/ 978-981-10-2830-4.
[21] Rabbani, M., Joshi, R. (2002). An overview of the JPEG 2000 still image
compression standard. Signal Process. Image Commun, 17, pp. 3-48.
[22] Smith, S.W. (1997). The Scientist and Engineer’s Guide to Digital Signal
Processing, 1st ed., pp. 285-296. California Technical Publishing, San Diego, CA,
U.S.A.
[23] Smith, R. (2007). An Overview of the Tesseract OCR Engine. In Proceedings of the
9th International Conference on Doc Analysis and Recognition, volume (2), pp.
629-633, Curitiba, Parana, Brasil.
[24] Thévenaz, P., Blu, T., Unser, M. (2000). Image Interpolation and Resampling. In
Handbook of Medical Imaging. Processing and Analysis; Academic Press series in
biomedical engineering, pp. 393-420. Academic Press, San Diego, CA, U.S.A.
[25] Thyssen, A. (2017). ImageMagick v6 Examples - Resampling Filters. http://
www.imagemagick.org/ Usage/ filter (Accessed: January 25, 2017).
[26] Turkowski, K. (1990). Filters for Common Resampling Tasks. In Glassner, A.S.
(ed.), Graphics Gems, pp. 147-165. Academic Press, San Diego, CA, U.S.A.
[27] Unser, M. (1999). Splines: A Perfect Fit for Signal and Image Processing. IEEE
Signal Process. Mag., 16, pp. 22-38.
[28] WOLFRAM (2017). Filter-Design Window Functions. https://
reference.wolfram.com/ language/ guide/ WindowFunctions.html. (Accessed:
January 25, 2017).
[29] Zaghetto, A., De Queiroz, R.L. (2007). MRC Compression of Compound
Documents Using H.264/AVC-I. Simpósio Brasileiro de Telecomunicações, Recife,
Brasil.
[30] Zaghetto, A., De Queiroz, R.L. (2008). Iterative Pre- and Post-Processing for MRC
Layers of Scanned Documents. In Proceedings of the 15th IEEE International
Conference on Image Processing, pp. 1009-1012, San Diego, CA, U.S.A.


MODERN TECHNOLOGIES AND INNOVATION – SOURCE OF COMPETITIVE ADVANTAGE FOR TOURISM SMES

Alexandru Tăbuşcă 1*
Laura Cristina Maniu 2

ABSTRACT
Tourism represents a fundamental driver of economic growth, having the potential to aid
in poverty reduction through its capacity to generate jobs and incomes in both
tourism and the related services industries. Tourism, the fastest growing industry in Europe,
mainly represents a private sector industry, based on small and medium enterprises,
especially within the destination areas.
In Europe and beyond, SMEs are considered an important factor for economic
growth and competitiveness, innovation, job creation and social integration. The effects
of the latest economic crisis have definitely marked the evolution of the modern SMEs.
Despite these negative aspects, the analysis of the indicators of entrepreneurial activity in
the tourism field indicates net growth in Romania in recent years. This is
due to the innovative potential, flexibility and adaptability that are essential traits of the
SME sector, traits that proved extremely useful in overcoming the latest economic crises.
The fast and continuous development of the information technologies brings profound
implications for the entire hospitality industry. The present day tourists are well informed
and responsible, becoming more and more demanding and generating a whole new level
of pressure on the tourism enterprises in order for them to adapt to this new situation. The
IT & C infrastructure and services are now a key element which can rapidly propel a
tourism SME or, in other cases, cause irreparable damage to a business in this field.
Good location-management software, a well-designed website and a good online
reservation and support system can make all the difference now.

KEYWORDS: Tourism, Competitiveness, Information Technology, SMEs, Innovation

1. INTRODUCTION

The hospitality industry followed an ascendant trajectory during 2014, at world level,
even though there were geopolitical issues and the general economic growth was actually
halted. This trend was maintained for hospitality during 2015 too, the sector being a great
catalyst for economic development and job creation. Improved living conditions,
demographic changes, more time for vacations, the modernization of transportation and
infrastructure, as well as the continuous development of information technology are

1
* corresponding author, Associate Professor, PhD, Romanian-American University, Bucharest,
tabusca.alexandru@profesor.rau.ro
2
Lecturer Ph.D., Romania-American University, Bucharest, laura.maniu@gmail.com


elements that stimulate the continuous development of tourism. At global level, the
tourism and travel industry provides around 266 million jobs and contributes around
9.5% of global GDP. Revenues from tourism, at world level, have increased constantly,
amounting to 1,230 billion dollars in 2015. The hospitality sector represents an
important part of the European economy too. Regarding the labour market, no less than
10.2 million people are directly employed in this industry, bringing an important
contribution to general development and to fiscal revenues. The international competitiveness of
an economy depends decisively on the competitiveness of its enterprises, and this is based
on their strategy, structure, management and objectives [1].
At European level, and not only, tourism occupies an important place especially among
the small and medium enterprises (SMEs), which are considered an important factor for
growth and economic competitiveness, innovation, employment and social
integration. SMEs are the most important part of the economy; they represent the base of
economic growth and employment. During 2013, the values of the main SME indicators
showed the major role they play within the economy, representing a real driving force for
economic growth and sustained development. Inside the EU, 99% of active enterprises
are SMEs, 60-70% of employees work within SMEs, and 58% of the
added value generated by the economy comes from SMEs [2].
The effects of the financial crises were felt at SME level too, affecting the performance
indicators. Nevertheless, in 2014, compared to 2013, we saw a rejuvenation of the SMEs,
the added value brought by them to the EU economy increasing by 3.3% and the
employment level by 1.2% [3]. According to the European Commission's forecasts, the
positive evolution of the SMEs' performance indicators will follow in 2016 the same trend
as during 2015. The current estimations for 2016 are a 3.7% increase in added value, a
0.9% increase in employment and a 0.7% increase in the number of SMEs.
The Small Business Act (SBA) represents a strategic initiative of the EU, the first
policy framework for SMEs within EU, adopted in 2008. Promotion of entrepreneurship
and support for SMEs must be considered government priorities for all EU members. The
SBA aims to improve entrepreneurship skills in Europe, as well as to eliminate the
obstacles in the way of SMEs' development. The main argument is that these companies are
characterized by flexibility and fast adaptation capacities to the changing economic
conditions, to the political environment and other perturbation factors, thus managing to
capitalize on existing opportunities at a faster pace. SMEs build their
competitive edge on the price-quality ratio, the quality of their products/services,
company reputation, the professionalism of their employees, after-sales relations with their
clients, distribution channels, management quality and, last but not least, innovation
capacity. The SME Performance Review represents the main instrument that the EC uses
in order to monitor and evaluate the progress of its member states regarding the
application of the SBA, offering information related to SME performance in the EU
countries and 9 other partner countries.
At EU level, the SMEs are mainly involved in five different sectors: commerce,
manufacturing industry, constructions, professional/scientific/technical activities and
HoReCa1. Within these sectors, according to a Post-Privatization Foundation study, we
1
HoReCa – comes from hotels, restaurants and café. Term widely used, especially in Europe.


can find more than 78% of the SMEs, more than 80% of the active workforce and more
than 71% of the total added value in the economy. At EU level, Romania occupies the 17th
place regarding total added value and the last position on the active SMEs ranking.
Looking at the percentage of employees within SMEs, Romania occupies a better 8th
place in the EU.

2. LEVEL AND DYNAMICS OF THE SMES PERFORMANCE IN THE FIELD OF TOURISM, AT EU LEVEL AND IN ROMANIA

Despite the slow economic growth inside the advanced economies and the geopolitical
tensions in several regions, tourism still represents a large part of the world economy
(estimated at around 9% of the global GDP) and of the occupied workforce, while the
number of international travels continues to increase [4]. Europe represents the region with
most international arrivals, due to its rich cultural resources, world class touristic
infrastructure and great health and hygiene conditions. International touristic arrivals in
Europe increased by 3% in 2014, amounting to 582 million, while the tourism sector's
revenues, in real terms, increased by 4%, amounting to 383 billion euro [5]. At world
level, Europe holds 51% of the total international arrivals and 41% of the international
revenues, representing the region with the most touristic destinations. Within this context,
the Central and Eastern Europe area was the only sub-region in Europe, and in the world
as a matter of fact, that saw a 5% decrease of arrivals, after three years of consecutive
growth. This situation is mainly due to the sharp decrease of arrivals in Ukraine (-48%),
caused by the Russian aggression in the Crimean peninsula. Romania had a healthy 12%
growth of international arrivals in 2014. In our country, tourism is considered a priority
development sector, focusing on natural landscapes and rich history, with a more recent
focus on entertainment destinations on the Black Sea coast line. This sector is regarded as
having a huge potential, even in the short term, and being able to bring an important
contribution to the economic development of the country. In 2013, Romanian tourism
contributed 5.1% of the GDP, placing us in the 154th position at world level, according to the
WTTC1 report.
Table 1. Distribution of number of active SMEs in tourism sector, number of employees and added value, 2012 (UE28)

Enterprise class | Number of enterprises (thousands) | Number of persons employed (thousands) | Value added (million)
SMEs (total) | 1825 | 10425 | 213425
Micro (0-9 employees) | 1651 | 4352 | 72472
Small (10-49 employees) | 159 | 2954 | 58200
Medium (50-249 employees) | 14 | 1235 | 32182
Share in non-financial business economy total (%) | 8.2% | 7.2% | 3.5%

Source: Adaptation after the European Commission, Key indicators, accommodation and food service activities, EU-28, 2012

1
WTTC - World Travel & Tourism Council


The values of the main SMEs indicators, cumulated at EU level, expressed in percentages,
give us a good picture of the SMEs value in the European economy: 99% of the total
enterprises in EU are SMEs, two out of three employees work within an SME, 58% of the
added value in the economy comes from the SMEs [6]. The contribution of the tourism
sector to the non-financial business economy can be summarized as follows: the
workforce share (7.8% of the total) was far larger than the generated added value (3.5%), while
the weight of the number of enterprises was 8.2%. The most employees are found in
micro-enterprises (approx. 4351400 persons), their weight in the total SME workforce
being 41.75% in 2012. The small enterprises provide around 29% of the total work places
in SMEs and 12% of the tourism SMEs workforce is employed in medium enterprises.
Micro-enterprises contribute over 33% of the added value provided by the tourism
SMEs in EU28. The small enterprises brought, during 2012, around 28% of the added
value of the SME sector, while the medium enterprises contributed only 15.1% of
the total added value.
Around 91% of the tourism SMEs in EU28 are micro-enterprises, the small enterprises
cover around 8.7% of the total European SMEs and only around 1% are medium
enterprises.
Table 2. Number of persons employed by enterprise size class, accommodation and food service activities, 2012 (UE 28)

Country | Total (thousands) | SMEs (% of total) | Micro (% of total) | Small (% of total) | Medium-sized (% of total)
UE28 | 10425 | 83 | 41.7 | 29 | 12
Germany | 1990 | 90.3 | 30.0 | 42.3 | 18.0
Italy | 1321.7 | 90.1 | 63.2 | 22.2 | 4.8
Spain | 1219.1 | 86.1 | 54.2 | 21.7 | 10.2
France | 991.8 | 81.4 | 47.5 | 27.4 | 6.5
Greece | 262.3 | 98.2 | 72.9 | 17.9 | 7.4
Romania | 154.1 | 89.8 | 34.7 | 39.2 | 15.9
Bulgaria | 140 | 92.2 | 41.3 | 33.4 | 17.4

Source: European Commission, Key indicators, accommodation and food service activities, EU-28, 2012.
As we can see in Table 2, the leaders of the workforce employment rank in EU28 are
France, Germany, Italy and Spain. Around 90% of the employees of this sector are to be
found in SMEs. Regarding the employment percentages in the tourism sector in micro-
enterprises, Greece holds the 1st place with 72.9%. In Italy, the workforce is concentrated
in micro-enterprises at 63.2%, in small enterprises at 22.2% and only 4.8% in medium
enterprises. Romania has a structure somewhat similar to that of our neighbor Bulgaria
regarding the distribution of the workforce in the tourism sector: 34.7% in micro-
enterprises in Romania and 41.3% in Bulgaria; 39.2% in small enterprises in Romania and
33.4% in Bulgaria; and around 16% in medium enterprises in both Romania and Bulgaria.


Table 3. Value added by enterprise size class, accommodation and food service activities, 2012 (UE 28)

Country | Total (million euro) | SMEs (% of total) | Micro (% of total) | Small (% of total) | Medium-sized (% of total)
UE28 | 213425 | 76.3 | 34.0 | 27.3 | 15.1
France | 34252 | 79.1 | 47.8 | 24.4 | 6.9
Germany | 33298 | 85.4 | 24.4 | 38.9 | 22.2
Italy | 26922 | 85.5 | 47.5 | 29.7 | 8.3
Spain | 24410 | 81.2 | 39.3 | 25.9 | 16
Austria | 7797 | 93.3 | 36.5 | 37.0 | 19.7
Romania | 729 | 79.0 | 19.8 | 34.7 | 24.5
Bulgaria | 614 | 85.7 | 19.6 | 30.4 | 35.7

Source: European Commission, Key indicators, accommodation and food service activities, EU-28, 2012.
The total added value of the EU28 SMEs sector amounts to 3557 billion euro, in 2012,
representing around 58% of the total volume of the EU28 economy. The relative
importance of the medium enterprises (defined as employing 50 to 249 people) was quite
low in the tourism sector, at EU28 level, contributing only around 38.5% of the added
value. More than half of the added value provided by the SMEs active in the tourism field
is brought by the first four EU economies: France (16%), Germany (15.6%),
Italy (12.6%) and Spain (11.4%).

Table 4. Enterprises active in accommodation and food services activities, 2012-2014 (UE 28)

Country/Region | 2012 | 2013 | 2014
EU 28 (total number of enterprises) | 1825191 | 1825323 | 1827427
Italy | 16.88 | 17.16 | 17.07
Spain | 15.24 | 15.09 | 14.77
France | 13.84 | 14.32 | 14.94
Germany | 11.90 | 11.18 | 12.38
Greece | 5.02 | 4.90 | 4.96
Hungary | 1.72 | 1.62 | 1.60
Bulgaria | 1.45 | 1.43 | 1.44
Romania | 1.29 | 1.33 | 1.37
(country rows are expressed as % of the EU28 total)

Source: Adaptation after the European Commission, Key indicators, accommodation and food service activities, EU-28
At EU28 level, the number of enterprises active in the sub-sector of accommodation and
food services has been on an increasing trend since 2012. In 2014, compared to 2013, their
number grew by 2,104. More than half of the active enterprises of the sector come from Italy, Spain,
France and Germany. In Romania the number of enterprises active in this sector is also on
the same ascending path, the Romanian enterprises of this kind amounting in 2014 to
1.37% of the EU28 total.
Romania occupies the 17th place in the EU by the added value of the SME sector, with
0.7% of the total added value of the sector at EU level. Our country contributes
0.34% of the added value of the SMEs from the accommodation and food services sub-
sector, ranking after Hungary but before Bulgaria.
The weight of the micro-enterprises in this sub-sector was especially high at EU28 level,
generating 34% of the added value and employing around 41% of the total workforce in
2012. The small enterprises were especially important for this sub-sector, with a weight of
27.38% of the added value and concentrating 29% of the workforce at EU28 level.
At global level, tourism is one of the best developing sectors of the economy, and
Romania bets on tourism for further international opening and economic development,
especially due to the fact that tourism also generates a significant amount of jobs in other
related sectors, such as transportation, entertainment and other services [7].
Innovation is key to economic growth, and the entrepreneurial spirit is very important
for innovation [1]. It is a recognized fact that entrepreneurship and entrepreneurial culture
play a vital role in increasing the competitiveness of any economy. Also, the lack of
entrepreneurial culture, of cooperation and of a strategic vision, at least on the medium
if not the long term, are factors that negatively impact the development of
entrepreneurship in our country. Tourism entrepreneurship can be developed if the
economic, political and social environment is favorable and if this activity is supported by
the community and government [8]. Countries that actively support innovation, risk-taking
with one's own business and the communication of success stories by businessmen tend
to inspire more people to become entrepreneurs [9].
Towards this end, the authorities can play an important role in changing the attitude towards
entrepreneurs, taking into account the interruption of entrepreneurial culture development
during the dark ages of the communist regime that plagued Romania for almost 50 years. The
improvement of the entrepreneur image can be done by sharing success stories, by explaining
their role in creating new jobs in the economy. One of the most important barriers in
entrepreneurship development in Romania is the fear of failure, according to an EY study.
Table 5. Dynamic of SME activity, October 2011 – March 2016, according to activity sector

Indicator | Industry | Constructions | Commerce | Transportation | Tourism | Services
SMEs that decreased activity | 17.01% | 16.48% | 24.20% | 18.60% | 10.00% | 17.46%
SMEs functioning in the same parameters | 67.22% | 70.33% | 62.23% | 72.09% | 63.33% | 70.48%
SMEs that increased activity | 15.77% | 13.19% | 13.56% | 9.30% | 26.67% | 12.06%

Source: Carta IMM-urilor, 2016, CNIPMMR
Examining the enterprises based on their sector of activity has shown the following
elements: the service providers have the largest share of enterprises doing business at
the same level during this period (70.48%), followed by the SMEs in industry (67.22%) and
those in tourism (63.33%). Regarding the SMEs that decreased business levels, commerce
has the largest percentage (24.20%), while the lowest percentage appears in the tourism
sector (10.00%). The tourism SMEs hold the largest share of the enterprises that increased
business during the period of the analysis (26.67%).
Table 6. Evolution and trends for SMEs active in tourism, Romania, 2010-2014

Indicator | 2010 | 2011 | 2012 | 2013 | 2014
No. of enterprises | 24402 | 22210 | 23499 | 24297 | 25013
No. of SMEs | 24379 | 22186 | 23473 | 24272 | -
Number of persons employed | 140564 | 169000 | 172000 | 175000 | 181000
Gross value added (RON million, current prices) | 5162.5 | 6095.7 | 8587.2 | 10947.9 | -

Source: România în cifre – breviar statistic, INS, Bucureşti, 2014, 2015; Anuarul Statistic al României 2014, 2015
Demographically speaking, during 2010-2014 the number of SMEs had an uneven
evolution. In 2011 their number fell by approximately 9% compared to 2010, while
during 2012-2014 their numbers increased constantly. The SME sector has been largely
affected by the global crisis, which peaked in 2009 and 2010. After that moment, the
sector saw a slight recovery but seems to have lost the 2012 favorable moment, the trend
of the Romanian SMEs being different from those found in the majority of EU countries,
according to the Post-Privatization Foundation studies. The data show an increase in
employment numbers in tourism SMEs starting with 2010. The total number of
employees in SMEs increased by 28% in 2014 compared to 2010.
The gross added value shows the contribution of the enterprises, fields and sectors to the
national economy, being considered a relevant indicator of economic performance of
enterprises, because it eliminates from calculation the intermediate costs which are part of
the business turnover [10]. The data shows that this type of added value brought by the
SMEs from the sub-sector of accommodation and food services is continuously
increasing, starting from 2010.
In Romania, the large companies recover much more rapidly from the crises than the
SMEs, because the latter are less competitive and less innovative. According to the White
Book of SMEs, the innovation efforts of the SMEs have focused on: new products
(33.30%), new management and marketing strategies (24.73%), new technologies
(23.18%), informatics systems modernization (7.85%) and training employees (7.03%). In
2016, compared to the previous year, there was a positive evolution in the form of a decrease
in the weight of SMEs that indicate no concern for innovation. The research also
shows that the main obstacles for research and development activities of SMEs were: high
costs (32.76%), insufficient funding (31.75%), uncertainty about the demand for innovative
products (27.01%), difficult access to relevant information regarding new technologies
(19.89%), difficult access to relevant information regarding markets (market researches,
statistics etc.) (19.62%), lack of public financing/co-financing instruments for research
and development and/or rigidity of eligibility criteria (13.05%), difficulties to find
partners for research and development (10.68%), lack of skilled human resources
(10.58%) and lack of medium and long term estimates about the evolution of their sectors

of activity (4.56%) [11]. According to the same study mentioned above, the main
elements of information technology used by Romanian SMEs are: PCs (76.55%),
internet (74.36%), e-mail applications (68.70%), company websites (28.10%), online
transactions (14.69%) and using an intranet (3.92%). From the previous year, we can see
a negative evolution related to the 4.5% increase in the weight of SMEs that do not use
basic information technology tools in their business. The tourism enterprises are
characterized by the largest share of respondents that mentioned the use of internet,
computers, e-mail applications, company website and online transactions. The aim of
using the internet and intranet was mainly communication with suppliers and clients,
obtaining information about the business environment, promotion of products/services,
electronic transactions and payments, intra-organization communications.

3. INFORMATION TECHNOLOGY IN TOURISM, AN INNOVATION TRAIN THAT WE CANNOT MISS

Romania is quite well known for the prowess of its programmers and its IT workforce
generally. While we have managed to become an important contender in several fields of
information technology at world level, Romanian tourism does not seem
to have adapted to these 21st-century realities very well.
Today, internet access is almost taken for granted in any civilized
country, with several countries even considering it a right in itself. Finland was the first
country to make internet access for all its citizens mandatory by law, turning access to the
global network into a right upheld by the law [12].
Romania seems to have focused its tourism SME activity on several niches: rural
tourism, agro-tourism and entertainment tourism (in incipient phases, especially at the
Black Sea Mamaia Resort). While these areas are all very good, they usually do not
provide a high margin of added value, mainly due to the numbers involved – being
niche markets, they do not attract vast numbers of international clients. The main
distribution channel for information about agro-tourism, for example, remains
word-of-mouth, with former clients spreading the information to other potential clients.
Even though the idea has its merits, this low-level business cannot bring a real step
increase in revenues, at country level. Romania should find a way to invest not only in
“boutique” tourism, but also in large scale business ventures in tourism, through both
public and private funding.
It may seem strange at first, but we consider that this potential increase in mass-tourism
would bring a definite advantage to the SMEs too. With a large number of visitors
brought by the large tourism enterprises, the potential customer base for SMEs
increases exponentially. The vast majority of the visitors to these large
enterprises will go out of their hotels/resorts and look for entertainment, food and other
services – things that SMEs can provide at a very high quality level and with flexible
prices. Also, the prices of the small “boutique”
accommodation SMEs can be increased, thus increasing the added value brought by
these businesses.


Usually, innovation in tourism means:
- product innovation
- process innovation
- logistic innovation
- market innovation
Generally speaking, everybody considers that the involvement of the IT field in tourism is
limited to e-tourism. This term, an established one by now, covers an aggregation of
several aspects: information, reservations, payments, communications. All of these can now
be done with the help of a computer or smartphone, over the internet. But, leaving aside the
possible misunderstandings and inaccurate information that can be an issue with this
approach, this entire paradigm is not something modern anymore! Actually, any sound
tourism business, SME or large enterprise, has to have an online presence now – at least a
website and a reservation system are mandatory for anyone in this field. And if everybody
has them, where is the new technology? Where is the wow factor that can differentiate one
offer from another?
We think that Romanian SMEs can really make a move and transform their environment
into a digital one. This policy is very well applicable to agro-tourism too: everything
could be done, presented and set up digitally, with the clients switching to an “old style”
vacation when they arrive – but with every detail already established and clear, set up
using the most modern IT apps. Also, Romanian SMEs in the field of tourism can rely on
different innovations not only for directly attracting customers, but also for improving logistics
and other support areas of their business – for example, they can group together, build a
small wind farm and use the electric power it generates for their own business [13].
One of the things that can be done, and in quite a short period of time too, is the
integration of augmented reality into the electronic presence of tourism SMEs in
Romania. While this approach requires a lot more effort on the side of a large enterprise,
at SME level it is perfectly doable. Imagine an SME with a small hotel, somewhere in the
mountains of Romania, that has the entire hotel, with all rooms, public spaces and
facilities, mapped inside an AR application. One of the first things people consider when
going on a vacation is accommodation – it would be great to have the possibility to see
the exact reality of each room, to see the view from every room, to see the bowling area,
the spa, the bar, the terraces etc. Going a step beyond, the same thing can be done for the
main attractions and points of interest in the immediate area (restaurants, museums, skiing
slopes, mountain biking, climbing etc). The public transportation in the area should also
benefit from a dedicated AR app that can reliably show you on your laptop or mobile
phone when the train or the bus is arriving, the schedule for the next days or the
meteorological alerts that would prevent driving for example.
One of the best practices in this area was established by Holiday Inn when they
announced their first Augmented Reality Hotel, in 2012. Their guests could see the 2012
London Olympics athletes going through the hallways and could even take a picture
with their favorite sports stars in their own hotel room.


Figure 1. AR app of Holiday Inn showing 2012 Olympics athletes in the lobby of the hotel 1

All small accommodation enterprises could build their own AR mobile apps that can
vastly improve the experience of their guests, from directions to different locations to
information and holographic-like guides for using different services within the hotel.
Another area where an AR approach could add some hype is the food services
field. By using AR, the clients can go through different themes and choose one for their
reserved table, the app could help translate the menu, and the plates can be seen realistically
before ordering and maybe even during the cooking process, with a live video stream from
the kitchen area.

Figure 2. AR app of Inamo restaurant, London, UK 2

Another AR implementation that would really appeal to tourism SMEs' clients is
related to the re-enactment of historical places.

1
Source: https://thinkdigital.travel
2
Source: https://thinkdigital.travel


Figure 3. AR app reconstructing Rome historical places1

Instead of seeing the ruins of the Tomis fortress, on the Black Sea coast, the tourists could
see the ancient buildings, the walls and the markets, maybe with an in-app purchase option
to order a juice from the ancient Greek vendor seen on the tablet or smartphone,
delivered by a Romanian employee directly to the customer based on the GPS readings of
his device. The possibilities of introducing AR are actually limitless, with only the time and money
required for different projects making the difference – some things are feasible, others
are not yet worth the money invested in them.
Of course, all these innovative IT applications would require funding, but we think that the
benefit from these innovative approaches would definitely outweigh the costs. By using
this approach on a large scale we could also re-position the Romanian tourism SMEs as
being at the forefront of technological advances, thus providing an image that can help
increase the prestige of the industry, bringing more and more customers.
We consider that, instead of focusing on a relatively small number of low-spending
visitors, mainly for agro-tourism (but the discussion can be scaled up to all areas), our
domestic SMEs active in the tourism field could also prepare and launch an “offensive”
for the also relatively small but statistically high-spending customers that could be
attracted by the cutting edge of technology being used in tourism services. The added
value, on the medium and long term, would definitely be higher than it is now, and this
development path would also build on the IT software industry that is available here
in Romania. At a future step, we could also start developing, building and selling
specialized hardware/software packages for tourism businesses, both domestic and
abroad.

4. CONCLUSIONS

Romania has a huge potential, as yet insufficiently exploited. Romania is ranked 76th in the
world ranking of competitiveness in tourism, between Azerbaijan and El Salvador, according
to a World Economic Forum study. The low competitiveness of the Romanian economy can
be, at least partially, explained by the lack of maturity of our entrepreneurial culture.

1
Source: http://www.soloroma.com


Thus, we consider that our country should focus on innovation, because it is a demonstrated
fact that innovative products and services, developed by skilled entrepreneurs, represent the
key to success for improving entrepreneurs' public image and the amounts of added value.
The inconclusive correlation between the weight of added value and the demographic potential of
Romania reflects the large gap in development, productivity and competitiveness of
Romanian SMEs, according to the Post-Privatization Foundation reports. The global
crisis generated negative effects in Romania as well, the dynamics of SME activity being
strongly affected by factors such as: major financing difficulties, financial blockages due to
late payments, reduced demand at domestic and international levels, and a low level of absorption
of EU funding. Nevertheless, the main obstacles for the development of SMEs in Romania,
according to a study by the National Council of Private Small and Medium Enterprises in
Romania, are the legislative framework, corruption, excessive bureaucracy, banking
policies and the insufficient professionalism of the Government and Parliament in countering the
effects of the crisis [11].

REFERENCES

[1] Rusu S., Antreprenoriat în turism şi industria ospitalităţii, Editura CH Beck,
Bucureşti, 2014
[2] Fundaţia Post-Privatizare, IMM-urile româneşti în Uniunea Europeană
[3] European Union- Annual Report on European SMEs 2014/2015, Editor Karen
Hope, 2015
[4] World Economic Forum, The Travel & Tourism Competitiveness-Report 2015,
Geneva, 2015
[5] UNWTO, Tourism Highlights 2015
[6] European Commission, A Partial and Fragile Recovery, Annual Report on
European SMEs 2013/2014, European Union 2014
[7] CIAPE, Analiza competenţelor HORECA, 2013, http://responsalliance.eu/wp-
content/uploads/2014/11/HORECA-SKILLS-ANALYSIS_RO.pdf
[8] Hollick M., Braun P., Lifestyle Entrepreneurship: The unusual nature of the tourism
entrepreneur, p.3 http://www.cecc.com.au/clients/sob/research/docs/pbraun/AGSE-
2005_1.pdf
[9] Ernst & Young, Antreprenorii vorbesc-Barometrul antreprenoriatului românesc
2013,
http://www.ey.com/Publication/vwLUAssets/Study_ESO_Barometer_2013_Feb_20
14/$FILE/Antreprenorii%20vorbesc_Barometrul%20antreprenoriatului%20romane
sc%202013.pdf
[10] Pîslaru D. (coordonator), Modreanu I., Contribuţia IMM-urilor la creşterea
economică –prezent şi perspective, Proiect- Îmbunătăţirea capacităţii instituţionale,
de evaluare şi formulare de politici macroeconomice în domeniul convergenţei
economice cu Uniunea Europeanăa Comisiei Naţionale de Prognoză, cod SMIS
27153, Comisia Naţională de Prognoză

[11] CNIPMMR, Carta Albă a IMM-urilor 2016, p.384


[12] Tabusca, Silvia Maria - "The Internet Access as a Fundamental Right"; published in
“Journal of Information Systems and Operations Management”, Vol.4. No.2 / 2010,
pp 206-212, ISSN 1843-4711.
[13] Lungu Ion, Carutasu George, Pîrjan Alexandru, Oprea Simona-Vasilica, Bâra
Adela, A Two-step Forecasting Solution and Upscaling Technique for Small Size
Wind Farms located in Hilly Areas of Romania, Studies in Informatics and Control,
Vol. 25, No. 1/2016, pp. 77-86, ISSN 1220-1766


TEACHING SOFTWARE PROJECT MANAGEMENT: THE COLLABORATIVE VERSUS COMPETITIVE APPROACH

Gabriela-Angelica Mihalescu 1*
Alin-Gabriel Gheorghe 2
Costin-Anton Boiangiu 3

ABSTRACT

The implementation and development process of a software project consists of a cycle
distributed in several stages which represent the lifecycle of the project. Research
done by the Standish Group 13 years ago states that only 16% of software
projects are completed successfully, 53% have flaws and bugs in them and 31% are canceled.
Considering this problem, the subject of this article is the comparison between the
collaborative and competitive approaches, considering a team involved in the
development of a software project.

KEYWORDS: Collaborative Approach, Competitive Approach, Software Project Management, Teaching Strategies, Mixed Collaborative Competition Learning

INTRODUCTION

"Let us put our minds together...and see what life we can make for our children." Sitting
Bull
Nowadays, the learning methods have evolved so much that the college teachers have a
lot of tools to make their students passionate and interested in the courses that they are
teaching. Whether they use formal or informal education, the teachers have the goal of
developing the abilities and technical knowledge of their students in order for them to
succeed.
The goal of the universities is to prepare their students to enter the industry and
be able to succeed. The IT companies on the market need students that have
knowledge in every IT field.
The students try their best to gain a little piece of knowledge from every course they study
in college and in order to achieve that, they have to work hard and practice alone.
Alongside the technical, engineering part, the students need to develop the ability to work
as a team, the ability to communicate efficiently and the ability to deliver before the
1
* corresponding author, Student, Politehnica University of Bucharest, 060042 Bucharest, Romania,
mihalescu.gabriela.angelica@gmail.com
2
Student, Politehnica University of Bucharest, 060042 Bucharest, Romania, gheorghealingabriel@gmail.com
3
Professor PhD Eng., Politehnica University of Bucharest, 060042 Bucharest, Romania,
costin.boiangiu@cs.pub.ro


deadline. These needs can be fulfilled during college if the college gives the students the
opportunity to work together.
Through this article, the authors want to identify the main characteristics of two
approaches: collaborative and competitive. They want to discuss the advantages and the
drawbacks based on their own experience.

PREVIOUS WORK

Researchers noticed that competition had indeed a positive impact on performance goals
and learning motivation in the classroom [10].
According to a study done by Barkley and Cross [3], the collaborative approach
represents the situation where the students work together in order to “achieve shared
learning goals”.
Barkley and Cross identified the main activities for this approach:
a) For class discussion: ask the students some questions and let them think of an
answer based on arguments. After some time, group them into groups of two and
let them talk about the question. If they disagree, tell them they have to reach the
same conclusion.
b) For reciprocal teaching: group the students in groups of 4-5 and give them an
important field of technology. In a limited amount of time, they have to become
“experts” and teach their colleagues all they know about that subject. For
everyone to be productive, every student must have their own role: mediator,
spokesman, time keeper, note taker, etc.
c) For problem-solving: group the students in a group of 2-4 and give every group a
problem to solve. After a limited amount of time, every team must present the
problem and the solution they achieved. The other groups can ask questions or
can come with new ideas so that the final solution is the best solution.
d) For writing: the students will work in groups and they will analyze a subject in
order to write a paper. Every student will come with their own ideas which will be
written on paper. After the individual work, they have to face their teammate’s
ideas and present them in front of their colleagues. The spectators will ask
questions and make suggestions. The presenting team will then use the feedback received from the spectators to improve their article.
Regarding the competitive approach, researchers reached the conclusion that the
collaborative approach brings better results than the competitive one, but there are
situations in which this is not true.
Johnson and Johnson (1999) [5] have analyzed and identified that there are some
constructive effects of the competitive approach in the following situations:
a) when there are rules and clear criteria for winning (if the rules are fuzzy, the
chances for the competition to fail are bigger)
b) tasks to be done are easy and simple


c) there are no dependencies between activities


d) every competitor has equal chances of winning
e) the prize is not that important

PROPOSED APPROACH

In order to analyze the two approaches, we looked for college situations where the students have the chance to work in a team or to compete with others. We consider it a relevant environment to study because future engineers learn how to work in a competitive or collaborative way starting in college.
For the collaborative approach, we followed projects where the teamwork and team
communication were more important than the results, so we chose the software projects
from “Software Project Management” course.
The competitive approach involves situations where there is a stake for which students are competing. Also, in order to find the best student, there must be a clear separation based on strong rules. In this case, we identified the projects developed in the "Artificial Intelligence" course.

Collaborative approach

The “Software project management” team tried every year to use different approaches in
the projects requested from students, in order to demonstrate the advantages and the
disadvantages of these approaches. In the current year (2017), the course team chose to go with
the collaborative approach and focus on the steps of a project lifecycle, not just the final
result. The teamwork and the communication, the process of establishing the
specifications and design, the development and testing cycle were the only things that
mattered.
The project for the second half of the semester was to develop a single-player game, called "Type the words!", which tested the player's typing abilities. The technology, the architecture, the milestones and the tasks were chosen by every team of students. From the very beginning of the project, students concluded that for this project to work they would have to collaborate efficiently. By communicating and exposing all of their ideas to the team, and by debating every opinion, they had to reach a common point. Students walked the project through every step of a normal project lifecycle:
1. Project initialization: using tools of informal education (brainstorming, debates,
votes, etc.), students analyzed the project's requirements and chose the best possible programming language in which the project would be developed, the test scenarios
and use cases. Every point here concluded in an SDD (Software Design
Document).
2. Project planning: after establishing all the details, students chose the responsible
people and the deadline for every task, such that they obtained a plan and a Gantt
diagram of the project. Using Microsoft Project, they identified the activities on
the critical path and treated them carefully, they were able to modify every


resource based on availability, they saw in a graphic manner the involvement of everyone and they used this as a tool for better time management.
3. Project implementation: developing features, defining test scenarios, defining use
cases, testing the project, bug fixing, weekly meetings and final presentation.
4. Project monitoring: done by watching the Microsoft Project planning and through
other tools like Git. Other tools for monitoring the project were the weekly
meetings and online discussions.
5. End of project: there was a final presentation in front of the other teams when
students received questions and feedback and presented a demo. The constant
feedback received from the assistant was helpful because it made the students
more efficient and better motivated.
A proof of the constant collaboration between the team members can be seen in the charts below, taken from the GitLab platform, which show how the project has been modified over time:

Figure 1. This figure shows the involvement of all the 6 members of a team who worked from
November 17 to December 21 with an average of 1 commit per day

For every commit inside the master branch, there was a merge request to solve. Every
team member has done his/her work inside a personal branch, making sure that everything
was functional before committing the code inside the master branch.
Another advantage of this way of working was that the person responsible for the merge
request could give the committer feedback about the readability or even bugs that could
pop up. Alongside this, the face-to-face or online debates helped students clear their thoughts and develop cleaner and more meaningful code.
Another advantage of working in a team is that students learned from each other. For
example, they chose web programming as the main technology although there were people who were not familiar with AngularJS. The people who didn't know the
framework beforehand had the chance to learn something new that could be useful at
some point. Finally, the project used 70.83% Javascript, 24.98% HTML and 4.19% CSS
according to Gitlab statistics.


Using the collaborative approach, students were able to simulate the lifecycle of a real
project and go through every step involved. In IT companies, every project is done
working in a team under a framework of Project Management (SCRUM, AGILE,
Waterfall), so this project prepared the students for the real life.
Alongside providing the environment for implementing the project, the practical class of "Software project management" helped students bond with each other, get to know a little bit about everyone, learn to listen and learn to accept other people's opinions. This was something new, something that college had not taught them. The general impression from college until then was that everyone should work individually on their homework, and any attempt to speak or share thoughts about the homework with other people could be considered plagiarism.
To conclude the collaborative approach part of the article, we would like to point out that this course brought a lot of benefits to the enrolled students: it prepared them for real IT company life, helped them bond, make friends and develop their technical abilities.

Competitive approach

Competition is "a social process that occurs when rewards are given to people, on the basis of how their performances compare with the performances of others doing the same task or participating in the same event." [9]
In order to discuss the competitive approach, we will take as our subject matter the
“Artificial Intelligence” course inside the Automatic Control and Computer Science
Faculty of “Politehnica” University of Bucharest.
In order to pass the class, the students need to accumulate a certain number of points during the semester. There are 3 homework projects published during the semester and each of them is worth 100 points. There is also a special amount of points, called bonus points (up to 20), which are given for the homework projects that stand out, measured by having the best (minimum) execution time or the highest score. The bonus points are given only to the first half of the leaderboard in a gradual manner: the first place gets the maximum number of points, the second place gets the maximum number of points minus a small percentage and so on.
We will present below the 3 homework projects to be performed by a student:
1. The first homework implied the generation of all possible texts, being given a
Morse code without separators. The difficulty of the homework consisted of
telling precisely where to place the space character in order for the Morse code to
become a natural language sentence. The bonus points were given for the best
minimum execution time of the program.
2. The second homework implied an algorithm for a cleaning robot. The difficulty
of conceiving such an algorithm was the fact that the robot had a certain amount
of substances in his inventory, substances which would be consumed when
cleaning a room. The bonus points were given for an algorithm which obtained
the best score in a certain amount of time.


3. The third homework implied an algorithm for clustering a very large set of documents (approximately 2,000). The difficulty of the homework consisted in choosing the best clustering algorithm and the best text preprocessing algorithms. The bonus points were given for the algorithm that could obtain the best purity percentage (the obtained clusters that reflect reality the best have a bigger purity percentage).
The competitive approach behind these bonus points was somewhat hidden. Every student found himself in the situation of asking other colleagues: "What is your execution time for test X?", "What score does your robot obtain for test Y?", "What is your purity for the clusters?". The answers to these questions would lead to one of two situations:
1. One in which the colleague's time/score/purity was worse than that of the student who asked; in this case, the student who asked left relaxed.
2. One in which the colleague's time/score/purity was better than that of the student who asked; in this case, the student who asked became determined to continue working on the homework in order to achieve a better, more efficient application.
This would be the first advantage identified in the competitive approach: the students who
have a competitive spirit are determined to self-improve through this approach, in order to
beat the others and be the best. Usually, competitions bring prizes for their participants,
encourage them to play and try to hit the podium. In this course, the prizes were designed
only for those who knew they could do more. The students who only wanted to pass the
class were not even interested in the performance of their colleagues.
To determine the students' competitive spirit, we analyzed the results of the first homework for all the 73 students involved. The best (minimum) time was 0.006913 seconds, the worst time was 15.480385 seconds and the average time was 3.249812 seconds. 52 students out of 73 obtained a better time than the average. From these results, we can conclude that 71% of the participants in the "Artificial Intelligence" course have a competitive spirit and did their best to win.

Figure 2. Competitive Students

Figure 2 shows how the 73 students who solved the first "Artificial Intelligence" homework are split. Only 21 students obtained a time slower than the average.
From our point of view, since more than 50% of the students participated in this competition for the bonus points, we can say that the competitive approach reached its goal. Another


advantage of the competitive approach is that not only the best student is encouraged to self-improve, but the others are as well.
Inside a team, the competition is not beneficial since it leads to forgetting the team
objectives. Forgetting the team objectives might lead to forgetting the personal objective.
In this case, it can lead to conflicts and misunderstandings between colleagues, and the
people with a high competitive spirit can lose their potential. However, Triplett [11] found that cyclists perform better when racing with or against other people than when riding alone.
According to the examples above, both the collaborative and the competitive approaches
are efficient. From our point of view, the collaborative approach is focused on people,
while the competitive approach is focused on knowledge. In a real life job, the best
approach possible is one that focuses on both the development of the employee
knowledge and the personal growth of the individual.
By looking for situations that represent a mix of both approaches, we found opportunities
for students in which they could work in a team competing against other teams.
First, we identified the hackathon concept. According to Wikipedia [8], a hackathon is an
event where multiple programmers and other people who participate in the software
development cycle (designers, project managers, etc.) work together in order to develop a
software project in a limited amount of time. The participants work in teams and obey the
rules in order to win a prize. Examples of such competitions organized by the
“Politehnica” University of Bucharest are Innovation Labs, eeStec Olympics, BEST
Engineering Marathon, IT Fest.
These competitions are created especially for students with a technical background, in
order to teach them how to work in a team for developing an IT project.

CONCLUSIONS

In this article, the authors analyzed the collaborative and competitive approaches considering real examples from our host university. In order to identify the advantages of the collaborative approach, we used the software projects proposed in the "Software project management" course, and for the competitive approach we discussed the "Artificial Intelligence" homework bonus system.
By doing this comparison, the authors believe that a combination of these two approaches is
the best for IT projects. As [12] states: “One benefit of the competitive-collaborative
approach is that the failure of a team in the final functionality does not produce the failure
of the entire project, a scenario very likely for a large project built on a collaboration
basis”.
Although software companies are mainly focused on strategies that encourage collaboration, people are different and have different needs. Some people feel better if they are appreciated, accepted and useful, while others consider that only the best succeed and act accordingly. If we were to select one of the two approaches, we would choose the
collaborative approach because the competitive approach might be dangerous and hard to
manage.


For a competition to be fair, the participants must have equal knowledge so that the best
would win. In real-life projects, the resources are like the pieces of a puzzle that add up to the project's success. People work with other people to do things they cannot do on
their own, so this is why other people’s support is very important in reaching the project’s
purpose.
That being said, the authors conclude by underlining the fact that the benefits of the collaborative approach outweigh those of the competitive approach, and we hope that in both the academic and the professional environment there will be a balance between the two, such that the benefits of both are reached.

FUTURE WORK

Future work will focus on discussing and analyzing how the two approaches could be integrated into student projects, in order to help students grow both personally and professionally. Also, the authors will try to discuss with some of their university teachers the possibility of creating homework that facilitates working in teams for first-year students. We will also analyze the possibility of creating homework which has a small bonus part (approximately 20%) awarded for the best performances, in order to encourage the students with a high competitive spirit, as the "Artificial Intelligence" course homework did.
Another future work area will be to focus on the collaborative approaches mixed with
competitive approaches as discussed in voting-based strategies [20][21][22][23].

REFERENCES

[1] Dr. Ranee Kaur Banerjee, “The Origins of Collaborative Learning”


[2] The Standish Group Report. Available: https://www.projectsmart.co.uk/white-papers/chaos-report.pdf. [Accessed 7 February 2017]
[3] Barkley, Cross, Major, “Collaborative Learning Techniques: A Handbook for
College Faculty”, 2005
[4] David E. Rumelhart, “Feature Discovery by Competitive Learning”
[5] David W. Johnson, Roger T. Johnson, “Learning Together And Alone: An
Overview”, University of Minnesota
[6] First "Artificial Intelligence" homework results (course website available at http://turing.cs.pub.ro/ia_10/)
[7] 44 benefits of collaborative learning. Available: https://www.gdrc.org/kmgmt/c-learn/44.html. [Accessed 7 February 2017]
[8] Hackathon definition. Available: https://en.wikipedia.org/wiki/Hackathon. [Accessed 7 February 2017]
[9] Simon Attle, Bob Baker, “Cooperative Learning in a Competitive Environment:
Classroom Applications”, 2007, volume 19, Number 1


[10] S. Lam and P. Yim and J. Law and R. Cheung. “The effects of competition on
achievement motivation in Chinese classrooms.” British Journal of Educational
Psychology, 74(2), 2004.
[11] N. Triplett, “The Dynamogenic Factors in Pacemaking and Competition”. The
American Journal of Psychology, Volume 9, Issue 4, July 1898, pp. 507-553.
[12] Costin-Anton Boiangiu, Alexandru Constantin, Diana Deliu, Alina Mirion, Adrian
Firculescu, "Balancing Competition and Collaboration in a Mixed Learning
Method", International Journal of Education and Information Technologies, ISSN:
2074-1316, Volume 10, 2016, pp. 51-57
[13] Costin-Anton Boiangiu, Adrian Firculescu, Nicolae Cretu, "Combining
Independence and Cooperation as One Anarchic-Style Learning Method",
International Journal of Systems Applications, Engineering & Development, ISSN:
2074-1308, Volume 10, 2016, pp. 97-105
[14] Costin Anton Boiangiu, Adrian Firculescu, Ion Bucur, “Teaching Software Project
Management: The Independent Approach”, The Proceedings of Journal ISOM, Vol.
10 No. 1 / May 2016 (Journal of Information Systems, Operations Management),
pp 11-28, ISSN 1843-4711
[15] Costin Anton Boiangiu, Adrian Firculescu, “Teaching Software Project
Management: The Competitive Approach”, The Proceedings of Journal ISOM, Vol.
10 No. 1 / May 2016 (Journal of Information Systems, Operations Management),
pp 45-50, ISSN 1843-4711
[16] Costin Anton Boiangiu, Ion Bucur, “Teaching Software Project Management: The
Collaborative Approach”, The Proceedings of Journal ISOM, Vol. 10 No. 1 / May
2016 (Journal of Information Systems, Operations Management), pp 134-140,
ISSN 1843-4711
[17] Adrian Firculescu, Ion Bucur, “Teaching Software Project Management: The
Anarchic Approach”, The Proceedings of Journal ISOM, Vol. 10 No. 1 / May 2016
(Journal of Information Systems, Operations Management), pp 92-98, ISSN 1843-
4711
[18] Adrian Firculescu, “Teaching Software Project Management: The Mixed
Collaborative-Competitive Approach”, The Proceedings of Journal ISOM, Vol. 10
No. 1 / May 2016 (Journal of Information Systems, Operations Management), pp
168-174, ISSN 1843-4711.
[19] Gabriela Bajenaru, Ileana Vucicovici, Horea Caramizaru, Gabriel Ionescu, Costin-
Anton Boiangiu, "Educational Robots", The Proceedings of Journal ISOM Vol. 9
No. 2 / December 2015 (Journal of Information Systems, Operations Management),
pp. 430-448, ISSN 1843-4711
[20] Costin-Anton Boiangiu, Radu Ioanitescu, Razvan-Costin Dragomir, “Voting-Based
OCR System”, The Proceedings of Journal ISOM, Vol. 10 No. 2 / December 2016
(Journal of Information Systems, Operations Management), pp 470-486, ISSN
1843-4711


[21] Costin-Anton Boiangiu, Mihai Simion, Vlad Lionte, Zaharescu Mihai – “Voting
Based Image Binarization” -, The Proceedings of Journal ISOM Vol. 8 No. 2 /
December 2014 (Journal of Information Systems, Operations Management), pp.
343-351, ISSN 1843-4711
[22] Costin-Anton Boiangiu, Paul Boglis, Georgiana Simion, Radu Ioanitescu, "Voting-
Based Layout Analysis", The Proceedings of Journal ISOM Vol. 8 No. 1 / June
2014 (Journal of Information Systems, Operations Management), pp. 39-47, ISSN
1843-4711
[23] Costin-Anton Boiangiu, Radu Ioanitescu, “Voting-Based Image Segmentation”, The
Proceedings of Journal ISOM Vol. 7 No. 2 / December 2013 (Journal of
Information Systems, Operations Management), pp. 211-220, ISSN 1843-4711.


MANAGING GRAPHICS PROCESSING UNITS' MEMORY AND ITS ASSOCIATED TRANSFERS IN ORDER TO INCREASE THE SOFTWARE PERFORMANCE

Alexandru Pîrjan 1*

ABSTRACT

This paper is focused on analyzing a key aspect, the management of the Graphics
Processing Units' (GPUs) memory, which is of paramount importance when developing a
software application that makes use of the Compute Unified Device Architecture (CUDA).
The paper tackles important technical aspects that can affect the overall performance of a
CUDA application such as: the optimal alignment in memory of the data that is to be
processed, obtaining optimal memory access patterns that facilitate the retrieving of
instructions; aligning to the L1 cache line according to its size, taking into account the
balance achieved between single or double precision and the effect on how much memory
is being used; joining more kernel functions into a single one in certain situations,
benefiting from the increased speedup offered by putting into use the shared and cache
memory, adjusting the code to the available memory bandwidth by taking into account the
memory latency and the need to transfer data between the host and the device.

KEYWORDS: CUDA, GPU, Memory, Kernel Function, Software Performance

1. INTRODUCTION

Among the most important technical aspects that affect the performance of most software applications are the memory bandwidth and the memory latency. The memory bandwidth measures
the quantity of data that can be transferred to or from a certain point in a certain amount of
time. The memory latency targets the time needed for an operation to respond to a certain
request and have the data available. In the case of the Graphics Processing Units that offer
support for the Compute Unified Device Architecture, the appropriate management of
these two quintessential technical aspects affects to a great extent the performance of the
developed application.
Many scientific articles have made determined efforts to devise optimization solutions
that target the Compute Unified Device Architecture enabled graphics processing units
[1], [2], [3]. The general purpose graphics processing unit (GPGPU) offers new
possibilities to overcome the limitations of traditional processors by offering a huge
parallel processing power potential, which can be harnessed to optimize data processing.
Achieving data processing at high speeds with low costs is of great importance in a large

1* corresponding author, Lecturer PhD, Faculty of Computer Science for Business Management, Romanian-American University, 1B, Expozitiei Blvd., district 1, code 012101, Bucharest, Romania, alex@pirjan.com


number of applications: in processing electronic payments [4], [5], in developing cryptographic algorithms [6], in implementing high-performance web solutions [7], in
complex office solutions [8], in Artificial Neural Networks [9], [10], in Resource
Description Framework query languages [11].
The classical central processing unit implements a memory model that is described in the
literature as being linear and flat [12]. This model implies that all of the central processing
unit’s cores have almost unrestricted access to memory, no matter where it resides. Most
of the modern central processing units offer three levels of cache memory: L1, L2 and L3.
Programmers who focus on developing optimization solutions for the central processing
unit have a good knowledge of these three levels and make an extended use of their
characteristics.
In spite of this, there are a lot of programmers who regularly ignore these types of memory when developing their applications, mainly due to the tendency
that has been induced in recent years by modern programming languages. These
languages, in the desire to achieve as much abstraction as possible, have a tendency to
detach the programmer from the hardware. Time has proven that this approach often
results in an increased productivity, because evolved compilers deal with the task of
abstracting, thus allowing problems to be tackled without losing precious time with this
matter. But the current state of technology makes it mandatory to fully comprehend the
hardware and its features in order to achieve the highest possible level of performance
when harnessing the parallel processing power of an architecture.
The graphics processing unit offers more types of memory that can be used to store and
retrieve data. Each of these memory types differ in terms of performance (bandwidth,
latency). The CUDA threads can access data from multiple memory addresses during the
execution. Each thread contains a local private memory area. Each thread block offers a
shared memory area accessible by all the threads within the block. The lifetime of the
shared memory is the same as the lifetime of the thread block in which it resides. All the threads have access to the same global memory. The architecture also provides two supplementary memory types that are read-only and accessible by all the threads: the constant memory and the texture memory. The global, constant and texture memory have the same lifetime as the application [13].
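To illustrate these memory spaces, the following minimal sketch (all identifiers are ours, chosen purely for illustration and not taken from a particular application, and the block size is assumed to be 256 threads) shows how each of them is typically declared in CUDA C:

    __constant__ float coefficients[16];      // constant memory, read-only inside kernels
    __device__   float globalAccumulator;     // global memory, lives as long as the application

    __global__ void memorySpacesDemo(const float *input, float *output, int n)
    {
        __shared__ float tile[256];           // shared memory, lives as long as the thread block
        int idx = blockIdx.x * blockDim.x + threadIdx.x;

        tile[threadIdx.x] = (idx < n) ? input[idx] : 0.0f;   // stage the element in shared memory
        __syncthreads();                                     // make the tile visible to the whole block

        float localValue = tile[threadIdx.x] * coefficients[idx % 16];  // per-thread register variable
        if (idx < n) {
            atomicAdd(&globalAccumulator, localValue);       // accumulate into the global variable
            output[idx] = localValue;                        // write the result to global memory
        }
    }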
Concerning the memory hierarchy, each streaming multiprocessor (SM) contains a
set of 32-bit register memory, together with a shared memory area that can be easily
accessed by every core of the streaming multiprocessor, but it is not visible to the other
streaming multiprocessors. The size of the shared memory varies depending on the
graphics processor architecture and the available number of registers. Besides the shared
memory, a streaming multiprocessor offers two levels of cache memory, one for texture
and another one for constants [14].
In order to improve the software performance of a CUDA application, the programmers must optimize the number of active threads and balance their memory resources: the number of registers and threads used per multiprocessor, the global memory bandwidth and the percentage of memory allocated to each thread. When one develops a CUDA application, he must employ a progressive approach, programming his


application in the first phase using only global memory. In a second phase, he can
consider implementing shared, constant, register and other types of memory.
This paper analyzes important technical aspects that can influence the overall performance of an application developed for CUDA-enabled GPUs: the increased speedup
offered by putting into use the shared and cache memory; the alignment of data in
memory; optimal memory access patterns; aligning to the L1 cache line; the balance
achieved between single or double precision and its effect on memory usage; joining more
kernel functions into a single one; adjusting the code to the available memory bandwidth
in accordance with the memory latency and the necessity to transfer data between the host
and the device.
In the following, the mechanism of caching data is analyzed on both the central and the
graphics processing unit, highlighting their common characteristics as well as their main
differences.

2. AN ANALYSIS OF THE CACHE MECHANISM

In the case of programs that process data sequentially, especially in the case of sequential programs that make extensive use of loop constructs, after a certain function has been called, the odds are high that in the near future the function will be used again. The
chances are very high for certain memory locations to be used frequently for storing or
retrieving information. The principle of locality states that it is very probable that one will
need to access again, in a short time-frame, a certain memory address or a certain portion
of a source code after having already accessed it once [12].
When compared to the processing speed of a central processing unit, the Dynamic
Random-Access Memory (DRAM) performance lags several orders of magnitude behind.
If it were not for the cache memory compensating for the low speed of the DRAM, most of the central processing unit's processing power would be wasted, as the processor would be bound to the DRAM's bandwidth and latency. The same situation
stands true for the graphics processing unit as some memory processes are insubstantial
when compared to the necessary minimum resources that can be allocated to handle them,
thus resulting in a reduced peak memory efficiency.
The cache memory actually consists of a type of memory that offers very high speeds and is situated in close proximity to the processing core. The production process of this type
of memory is very expensive and influences the final selling price of the processor.
Depending on the types of cache memory, the sizes and speeds can vary significantly. For example, the L1 cache has a size of around 16 to 64 KB and offers the highest level of performance. This type of cache memory is assigned to a certain core of the central processing unit. The L2 cache's size varies between 256 and 512 KB, but it operates at a
lower speed than the L1 cache. The L3 cache has been introduced most recently, having a
size of a few megabytes and just like the L2 cache it can be used by more processor cores
or assigned to specific ones. On typical central processing units, the L3 cache is allocated
among the processor cores, thus facilitating the communications between the cores at high
speeds.


In the case of the CUDA-enabled GPUs, the Fermi architecture brought for the first time a
portion of cache memory which is entirely hardware managed. This architecture also
introduced a L1 cache associated with each streaming multiprocessor that offers the
possibility to be managed by the programmer or automatically, by the hardware. Along
with this L1 cache, the architecture also implemented a L2 cache that is allocated among
the streaming multiprocessors. This addition facilitates the communication between the
threads belonging to the same processor, not having to use the low speed global memory
when sharing small amounts of data. The atomic operations benefit a lot from the
introduction of this type of memory, as a homogenous value at a certain memory address
is available to the streaming multiprocessors. Until this type of memory was introduced, a streaming multiprocessor had to store data into the low speed global memory and retrieve it afterwards in order to be certain of the cohesion among the CUDA cores.
Developers are primarily monitoring the global memory’s bandwidth while memory
latency is compensated on the Compute Unified Device Architecture by invoking threads
originating in different warps. As described before, a CUDA device implements several types of memory, each type having a distinctive scope, a different lifetime within the application and its own caching characteristics. The global memory
is represented by the device’s Dynamic Random Access Memory (DRAM) and is
especially used for transferring data among the host and the device in addition to inserting
data and retrieving the output results of a kernel.
The qualifier global ascertains the scope of the memory, meaning that this type of memory is accessible by both the device and the host system. In order to declare a variable as being stored in global memory, one can use the "__device__" qualifier or allocate it dynamically with the "cudaMalloc()" instruction. The lifetime of this type of memory is equivalent to the lifetime of the application: a variable that has been declared using the global memory qualifier exists during the lifetime of the application. For devices offering support for certain compute capabilities, the global memory can be stored in the cache memory of the chip.
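As a brief sketch of the two options mentioned above (the buffer size and variable names are illustrative assumptions, not taken from a specific application), a global memory buffer can either be declared statically with the qualifier or allocated dynamically and filled from the host:

    #include <stdlib.h>
    #include <cuda_runtime.h>

    __device__ float deviceCounter;                 // statically declared global memory variable

    int main(void)
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *hostData = (float *)malloc(bytes);   // host buffer (contents left uninitialized here)
        float *deviceData = NULL;

        cudaMalloc((void **)&deviceData, bytes);                          // dynamically allocated global memory
        cudaMemcpy(deviceData, hostData, bytes, cudaMemcpyHostToDevice);  // host -> device transfer

        /* ... kernel launches that read and write deviceData ... */

        cudaMemcpy(hostData, deviceData, bytes, cudaMemcpyDeviceToHost);  // device -> host transfer
        cudaFree(deviceData);
        free(hostData);
        return 0;
    }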
The process of grouping the threads into warps is beneficial to the processing
performance as it optimizes the inserting and retrieving of data in global memory. Global
memory is coalesced by the device, thus reducing the number of transactions and
optimizing the bandwidth. Different Compute Unified Device Architectures determine
different sizes for the memory transactions. For example, on older compute 1.x
compatible devices the size of the coalesced memory transaction begins at 128 bytes for
each memory access. Afterwards, this value will be cut down to 64 or 32 bytes if the
memory area that is being accessed has a low dimension and resides in the same aligned
block of threads, having the dimension of 32 bytes. Because this memory is not cached, the bandwidth is negatively affected if the threads do not access consecutive memory addresses.
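As an illustration of why this matters, the following minimal sketch (the kernel names are ours) contrasts a coalesced access pattern with a strided one; on the older devices described above, the second kernel multiplies the number of memory transactions and wastes most of the available bandwidth:

    // Coalesced: thread k of a warp reads element base + k, so the warp's 32 reads
    // fall into consecutive addresses and can be served by few memory transactions.
    __global__ void coalescedRead(const float *in, float *out, int n)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < n)
            out[idx] = in[idx];
    }

    // Strided: thread k reads element k * stride, scattering the warp's accesses over
    // many 128-byte segments and multiplying the number of memory transactions.
    __global__ void stridedRead(const float *in, float *out, int stride, int n)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx * stride < n)
            out[idx] = in[idx * stride];
    }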
Therefore, if a programmer does not pay careful attention to how different types of memory are used, he risks losing most of the potential bandwidth he could have obtained
on the graphics processing unit. In contrast with compute 1.x compatible devices, on more
advanced CUDA architectures, the technical possibilities are enhanced. Devices based on
the Fermi architecture retrieve memory in transactions varying in size between 32 and 128


bytes. Support for 64-byte sizes is not offered; implicitly, each memory transaction has a 128-byte cache line size. The main advantages resulting from this enhancement consist
in the fact that strided access within the bounds of 128 bytes is cached and another
retrieve from memory is no longer necessary. The whole CUDA programming model
starting with the Fermi architecture has become significantly simpler than the one
implemented by the preceding architectures.
The number of ongoing transactions is a significant factor that affects the overall
performance of a CUDA application. The memory transactions are queued and afterwards
are executed one by one. This mechanism implies a consumption of the processing
resources. It is more efficient for a thread to perform a read of three integers in a single pass than to perform three separate reads. According to the official NVIDIA
documentation, in order to reach the peak bandwidth for Kepler and Maxwell, the
programmer has to employ several strategies. One strategy consists in filling the processor
up with warps until the near full occupancy is reached.
Another strategy consists in performing 64 or 128 bit reads using vector types like
float2/int2 or float4/int4, the occupancy would be lower than in the case of the first
strategy, but the memory bandwidth would still reach its peak. Practically, this strategy issues fewer transactions of greater size that can be processed more proficiently by the hardware. The use of vector types also introduces a certain instruction-level parallelism due
to the fact that a thread processes more elements. Nevertheless, the programmer must pay
particular attention to the fact that the use of vector types is essentially connected with an
alignment of 8 and 16 bytes, depending on the used data type. In [12] it is stated that
processing four elements per thread leads to an improved performance due to a moderate
register usage, an increased opportunity for instruction level parallelism and higher
memory bandwidth.
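A minimal sketch of this second strategy, assuming a simple element-wise scaling operation (the kernel name and the operation itself are illustrative), reads and writes float4 values so that each thread issues 128-bit transactions; the 16-byte alignment required by float4 is already guaranteed for buffers returned by cudaMalloc:

    // Illustrative vectorized kernel: each thread handles one float4 (four floats, 128 bits).
    // n4 is the number of float4 elements, i.e. the float count divided by four.
    __global__ void scaleVectorized(const float4 *in, float4 *out, float factor, int n4)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < n4) {
            float4 v = in[idx];          // single 128-bit load
            v.x *= factor;
            v.y *= factor;
            v.z *= factor;
            v.w *= factor;
            out[idx] = v;                // single 128-bit store
        }
    }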

3. AN ANALYSIS OF THE LIMITING FACTORS

The memory latency and memory bandwidth, as well as the latency existing at the
instruction level are significant causes for capping the kernel functions performance. The
programmer must identify exactly what causes the deficiency and solve it so that the
performance can reach its full potential. In order to identify the key points in the code that
must be addressed, the programmer must analyze the existing arithmetic instructions. Replacing the computation with a one-to-one assignment provides a satisfactory test if the outputs have a one-to-one correspondence with the inputs. After performing this substitution, if the percentage of processing time previously spent on arithmetic operations drops, then the application is arithmetically bound. However, if the overall performance remains unaffected, the application is memory bound [12].
One way of solving the issues is to use the Parallel Nsight or Nsight Eclipse Edition tool
to profile the kernel function (Figure 1).


Figure 1. An example of profiling a kernel function using the Nsight Eclipse Edition 1

By applying the Analysis function, one should inspect the report regarding the statistics of
instructions. If the report signals that the kernel's memory pattern exhibits poor coalescing
and the graphics processing unit has to process the instruction stream serially in order to
offer support for scatter memory reads or writes, one has to correct these issues.
First, the programmer should try to change the memory allocation pattern so that the graphics processing unit is able to coalesce the access pattern across the threads. The best possible case is when the data layout generates an access pattern that is column based according to the thread. If the data layout cannot be repositioned, the programmer can try to modify the thread pattern by using the threads to store the data that is about to be processed in shared memory. By using this technique, the programmer no longer has to worry about coalescing when the data is subsequently read from shared memory.
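A minimal sketch of this staging technique, using for illustration a block-wise matrix transpose (the tile size, the names and the assumption that the matrix is square with a width that is a multiple of the tile size are ours), lets the threads load the data with coalesced reads and then address the shared memory copy in any order:

    #define TILE 32

    // Illustrative staging kernel: coalesced loads into shared memory, arbitrary access afterwards.
    // Assumes a square matrix whose width is a multiple of TILE and 32x32 thread blocks.
    __global__ void stageThroughSharedMemory(const float *in, float *out, int width)
    {
        __shared__ float tile[TILE][TILE + 1];    // +1 column avoids shared memory bank conflicts

        int x = blockIdx.x * TILE + threadIdx.x;
        int y = blockIdx.y * TILE + threadIdx.y;

        // Consecutive threads read consecutive addresses: the load is coalesced.
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];
        __syncthreads();

        // Once the data is in shared memory, it can be accessed in any order without
        // worrying about coalescing; here the transposed element is written back coalesced.
        int tx = blockIdx.y * TILE + threadIdx.x;
        int ty = blockIdx.x * TILE + threadIdx.y;
        out[ty * width + tx] = tile[threadIdx.x][threadIdx.y];
    }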
Another solution consists in enlarging the dimension of the output dataset, thus obtaining
a higher number of processed elements per thread. This technique has the potential to
improve both the arithmetic and memory bound situations. It is best to duplicate the code
and avoid inserting a new loop within the thread. The read instructions should be
positioned at the beginning of the kernel function so that they have time to gather the data
before being used. This mechanism will raise the number of needed registers, so it is important to closely monitor the number of warps involved, in order to make sure that it is not cut down.
In the case of arithmetic bound kernels, the programmer should analyze the generated
Parallel Thread Execution (PTX) code. The programmer must weigh the advantages and
disadvantages of unrolling the existing loops, and he can also activate 24-bit floating-point arithmetic, which can greatly improve the overall performance.

1 The Figure has been downloaded from the official Nvidia documentation site https://developer.nvidia.com/nsight-eclipse-edition , accessed on 09.29.2016, at 21:00.


4. AN OPTIMAL HANDLING OF THE MEMORY


Obtaining a high level of performance in a CUDA developed application is often
conditioned by obtaining the right memory pattern. Central processing unit applications
have a tendency to organize the data in memory by rows. Although the latest CUDA
architectures can handle noncoalesced memory operations, older devices cannot. The
developer must try to obtain, for both the global and shared memory, a pattern that is
accessed in columns by successive threads. In the case of the "cudaMalloc" function, the alignment process does not have to be performed manually, as the function allocates memory grouped in aligned blocks of 128 bytes.
If the programmer needs to use in his application a structure that stretches over this limit,
he can insert padding bytes/words directly into the structure or he can call the
"cudaMallocPitch" function instead. The alignment of memory determines if there is a
need to fetch the transactions or cache lines. There are situations when one can access a
structure containing a certain header at the beginning and the first thread will process a
memory address with an offset different from zero [12]. In these situations, a single fetch instruction will not be enough to provide the data to all the 32 threads of the warp.
Moreover, a supplementary transaction will be generated, having a 128-byte size. The
succeeding warps will be affected by this problem like a domino effect, the whole
memory bandwidth being reduced in half due to a single header at the beginning of the
structure. On older devices compatible with compute capability 1.x, the supplementary fetch instruction is simply discarded instead of being used to fill the cache memory in advance.
A possible solution to this problem would be to explicitly store the header in a different
memory address, thus making it possible to align the memory for subsequent operations.
Another solution consists in the addition of padding bytes within the structure in order to
assure the desired alignment of the header as to correspond to the 128-byte limit. If the
structure is not being used afterwards to produce an array, the repositioning of its
elements will suffice. After assuring that the threads are in accordance with the optimal
memory pattern, the bandwidth will increase significantly. Regarding the read and write
instructions, several reads originating from the same address will provide good results
from the performance point of view as the graphics processing unit will allocate the value
to the following threads of the warp without having to use supplementary memory fetch
instructions. The same cannot be stated about several write instructions to the same
memory address, as in this case the write instructions must be serialized.
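As a minimal sketch of these two options (the structure layout and array dimensions are illustrative assumptions, not taken from a specific application), one can either pad and align a structure explicitly or let "cudaMallocPitch" pad each row of a 2D allocation:

    #include <cuda_runtime.h>

    // Explicit padding and alignment of a structure to a 16-byte boundary.
    struct __align__(16) Record {
        float value;
        int   key;
        int   pad[2];     // padding so that sizeof(Record) stays a multiple of 16 bytes
    };

    int main(void)
    {
        // Pitched allocation: every row starts at an address satisfying the alignment
        // requirements of coalesced accesses; "pitch" is the padded row size in bytes.
        float *devMatrix = NULL;
        size_t pitch = 0;
        int width = 1000, height = 1000;                  // illustrative dimensions
        cudaMallocPitch((void **)&devMatrix, &pitch, width * sizeof(float), height);

        // Inside a kernel, element (row, col) is then addressed as:
        //   float *row = (float *)((char *)devMatrix + rowIndex * pitch);
        //   float element = row[colIndex];

        cudaFree(devMatrix);
        return 0;
    }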
Another practice in order to achieve optimal alignment consists in inserting a padding value
that does not affect the outcome of the processing in any way. For example, when adding up
values over an array, the padding with the zero value will have no effect on the final result but
it can help to obtain the desired memory pattern and assure the optimal execution path within
the warp. When the programmer works with bins that have a fixed size, it becomes easier to
generate the dataset and manipulate it in columns instead of rows. In order to obtain coalesced
operations in regards to the global memory, one can create a buffer using the shared memory.
An important metric that must be taken into account when developing Compute
Unified Device Architecture applications is represented by the ratio between the number
of memory operations and the number of arithmetic ones. In the scientific literature, it has
come to the conclusion that this ratio should be at least 10 to 1 [12], meaning that for each
fetch instruction issued to the global memory there have to be processed ten or even more

operations. These operations can be of different types, ranging from computing the arrays indexes, branches, evaluating conditional expressions, computing loops etc. Each of these operations should influence positively the final result; moreover, the loop instructions should in most of the cases be unrolled, otherwise they tend to consume a lot of resources and generate an overhead of instructions.
From the architecture point of view, one can observe that within a streaming multiprocessor there are implemented special odd and even dispatchers that issue the warps to multiple cores. The latest Pascal GP100 streaming multiprocessor architecture brings significant improvements in areas such as the Compute Unified Device Architecture cores occupancy level and the performance per watt, thus ensuing noteworthy enhancements that translate into an increased overall performance when compared to previous architectures. The Pascal streaming multiprocessor includes 64 Compute Unified Device Architecture cores that have a single precision FP32 (Figure 2). The previous CUDA architectures incorporate 128 cores in a Maxwell streaming multiprocessor and 192 cores in a Kepler one, in both cases the precision being FP32.

Figure 2. An overview of the latest Pascal GP100 SM architecture 1

The streaming multiprocessor of the latest Pascal GP100 architecture comprises two processing blocks that offer 32 single precision Compute Unified Device Architecture processing cores, two instruction buffers, two warp schedulers and four dispatch units, two per each processing block (Figure 2). Even if the streaming multiprocessor from the Pascal architecture contains only half the number of cores when compared to the previous Maxwell architecture, the Pascal architecture streaming multiprocessor preserves the

1 The Figure has been downloaded from the official Nvidia documentation site https://devblogs.nvidia.com/parallelforall/inside-pascal/ , accessed on 09.30.2016, at 14:05

similar size for the register file and can sustain the same level of occupancy regarding the
thread blocks and warps. The Pascal GP100 streaming multiprocessor provides the same
amount of registers like in the cases of Maxwell GM200 and Kepler GK110 architectures,
but with a major difference: the Pascal architecture offers as a whole considerably more
streaming multiprocessors, consequently putting forward more registers than the previous
CUDA architectures had to offer.
When developing applications targeting the Pascal architecture, the programmer must make use of the fact that the Pascal graphics processing unit's threads can access far more registers than on previous architectures and that there are more thread blocks, threads and
warps available that can run concurrently. The higher number of streaming
multiprocessors also determines an increased overall shared memory size and the
aggregate bandwidth of the shared memory is in fact more than two times higher than on
the previous Maxwell architecture. All of these aspects make the Pascal GP100 streaming
multiprocessor more efficient when processing the source code as the scheduler can select
from a higher number of available warps, the available shared memory bandwidth for each thread is considerably improved and more data can be loaded into memory.
Table 1 depicts the main technical advantages offered by the Pascal GP100 architecture,
implemented in the Tesla P100, when compared to the previous Maxwell GM200,
implemented in the Tesla M40 and the Kepler GK110, implemented in the Tesla K40
graphics card.
Table 1. A comparison between the technical features of the latest CUDA architectures1

1 The table has been created according to the official Nvidia documentation site https://devblogs.nvidia.com/parallelforall/inside-pascal/ , accessed on 09.30.2016, at 17:55


In contrast with the Kepler architecture, the GP100 streaming multiprocessor provides a
plainer data path structure that can handle more efficiently the data transfers. The Pascal
architecture offers the possibility to overlap load and store instructions far better than
before, along with an improved scheduling process, thus delivering an increased level of
performance with lower power consumption. There is one warp scheduler per processing
block that can dispatch two warp instructions per clock [15]. A novel addition to the
Pascal GP100's Compute Unified Device Architecture cores is the ability to process instructions and data with a precision of both 16-bit and 32-bit, while the throughput of FP16 operations is up to two times that of FP32 operations.

5. THE KERNEL FUSION TECHNIQUE

When multiple kernels are running sequentially, it can often happen that one can identify
certain elements within the kernels that can be joined together (fused). The programmer
must be very thorough when applying this technique because the process of creating two
kernels in series produces an implied synchronization between the two CUDA functions.
It is a common practice, when one develops CUDA kernel functions, to divide the needed
operations into several phases or steps. For instance, during the first step, one can
compute the results associated with the entire dataset and during the second step the data
can be filtered according to specific criteria.
If the second step takes place inside a block of threads, the first phase along with the
second one can be joined together within the same kernel function. By doing so, the
supplementary resources, necessary to invoke the second kernel, are removed along with
the writing instructions of the first kernel into the global memory and the succeeding
reading instructions of the second kernel. The first kernel function has the possibility to
store its results using the shared memory and use them in the second step of the
processing, thus eliminating completely the need to access global memory.
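A minimal sketch of this idea, under the assumption of a simple two-phase computation (square every element, then keep only the values above a threshold; the kernel name, the operations and the per-block size of 256 threads are illustrative), could look as follows:

    // Illustrative fused kernel: phase 1 (compute) and phase 2 (filter) share the
    // intermediate results through shared memory instead of a global memory round trip.
    // outCount must be zero-initialized on the device before the launch.
    __global__ void fusedComputeAndFilter(const float *in, float *out, int *outCount,
                                          float threshold, int n)
    {
        __shared__ float results[256];            // assumes blockDim.x == 256
        int idx = blockIdx.x * blockDim.x + threadIdx.x;

        // Phase 1: compute and keep the intermediate result on chip.
        float value = (idx < n) ? in[idx] * in[idx] : 0.0f;
        results[threadIdx.x] = value;
        __syncthreads();

        // Phase 2: filter, consuming the phase 1 results directly from shared memory.
        if (idx < n && results[threadIdx.x] > threshold) {
            int pos = atomicAdd(outCount, 1);     // reserve a slot in the compacted output
            out[pos] = results[threadIdx.x];
        }
    }

Launching this single kernel replaces the launch of two separate kernels and removes the intermediate global memory buffer that would otherwise connect them.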
The operations that perform reductions can take advantage from this technique because
the results produced by the subsequent steps are typically less than they were during the
first step, as a consequence the consumption of memory bandwidth is substantially
reduced. The rationale behind this technique’s success lies in the ability of data reuse. The
fetching operations applied to global memory are very slow, on the average of 500 clock
cycles. This is the reason why it is better to read bigger segments of data and store them in
memory, preferably into faster memory types. In the Compute Unified Device
Architecture it is better to retrieve the data in segments of up to 16 bytes per thread and
not to use words or single bytes. After every thread has processed without errors one
element, it is best to shift processing to two elements per thread. From this point forwards,
one can experiment and analyze the obtained results when processing four or even six
elements per thread. As soon as the desired data has been obtained, one must try to store it in the faster memory types, like the shared or register memory, and reuse it as often as needed.
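As a minimal sketch of this incremental strategy (the kernel name and the per-thread workload of four elements are illustrative assumptions), each thread can simply be made responsible for several elements, staged in registers before further reuse; for clarity the elements handled by one thread are consecutive here, whereas in practice one would often combine this with float4 loads or a block-stride layout to keep the warp's accesses coalesced, as discussed earlier:

    #define ELEMENTS_PER_THREAD 4

    // Illustrative kernel processing ELEMENTS_PER_THREAD items per thread.
    __global__ void processFourPerThread(const float *in, float *out, int n)
    {
        int base = (blockIdx.x * blockDim.x + threadIdx.x) * ELEMENTS_PER_THREAD;

        float r[ELEMENTS_PER_THREAD];                  // register storage for reuse

        #pragma unroll
        for (int i = 0; i < ELEMENTS_PER_THREAD; ++i)  // read phase
            r[i] = (base + i < n) ? in[base + i] : 0.0f;

        #pragma unroll
        for (int i = 0; i < ELEMENTS_PER_THREAD; ++i)  // compute and write phase
            if (base + i < n)
                out[base + i] = r[i] * 2.0f;
    }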


6. CONCLUSIONS

The management of the Graphics Processing Unit memory is essential in order to obtain a
high level of performance when developing a CUDA application. The developer must
take into account technical aspects that have a significant impact on the overall
performance, such as: an analysis of the cache mechanism as to benefit from the increased
speedup provided by the shared and cache memory; aligning the data in accordance to the
cache memory supported by the targeted architecture; facilitate the retrieving of
instructions through optimal memory access patterns; obtaining an optimal memory
occupancy in accordance with the processed data types; a thorough understanding of the
performance limiting factors and their correction; an optimal handling of the memory
according to the processing architectures and their technical characteristics, using
optimization techniques such as merging more kernel functions into a single one.

REFERENCES

[1] Petroşanu Dana-Mihaela, Pîrjan Alexandru, Economic considerations regarding the


opportunity of optimizing data processing using graphics processing units, JISOM,
Vol. 6, Nr. 1/2012, pp. 204-215, ISSN 1843-4711.
[2] Pîrjan Alexandru, Optimization Techniques for Data Sorting Algorithms, The 22nd
International DAAAM Symposium, Annals of DAAAM for 2011 & Proceedings of
the 22nd International DAAAM Symposium, Vienna, 2011, pp. 1065-1066, ISSN
1726-9679, ISBN 978-3-901509-73-5.
[3] Lungu Ion, Pîrjan Alexandru, Petroşanu Dana-Mihaela, Optimizing the
Computation of Eigenvalues Using Graphics Processing Units, Scientific Bulletin,
Series A, Applied Mathematics and Physics, Vol. 74, Number 3/2012, pp.21-36,
ISSN 1223-7027.
[4] Pîrjan Alexandru, Petroşanu Dana-Mihaela, Dematerialized Monies – New Means
of Payment, Romanian Economic and Business Review, Vol. 3 Nr. 2/2008, pp. 37-
48, ISSN 1842 – 2497.
[5] Pîrjan Alexandru, Petroşanu Dana-Mihaela, A Comparison of the Most Popular
Electronic Micropayment Systems, Romanian Economic and Business Review, Vol.
3, Nr. 4/2008, pp. 97-110, ISSN 1842–2497.
[6] Tăbuşcă Alexandru, Established ways to attack even the best encryption algorithm,
JISOM, Vol., No.2.1/2011 – December 2011, pages 485-491, ISSN 1843-4711.
[7] Tăbuşcă Alexandru, HTML5 - A new hope and a dream, JISOM, May2013, Vol. 7
Issue 1, p49, ISSN 1843-4711.
[8] Pîrjan Alexandru, Petroşanu Dana-Mihaela, Solutions for developing and extending
rich graphical user interfaces for Office applications, JISOM, Vol. 9, Nr. 1/2015,
pp. 157-167, ISSN 1843-4711.


[9] Lungu Ion, Căruţaşu George, Pîrjan Alexandru, Oprea Simona-Vasilica, Bâra
Adela, A Two-step Forecasting Solution and Upscaling Technique for Small Size
Wind Farms located in Hilly Areas of Romania, Studies in Informatics and Control,
Vol. 25, No. 1/2016, pp. 77-86, ISSN 1220-1766.
[10] Lungu Ion, Bâra Adela, Căruţaşu George, Pîrjan Alexandru, Oprea Simona-
Vasilica, Prediction intelligent system in the field of renewable energies through
neural networks, Journal of Economic Computation and Economic Cybernetics
Studies and Research, Vol. 50, No. 1/2016, pp. 85-102, ISSN online 1842– 3264,
ISSN print 0424 – 267X.
[11] Altar Samuel A., Costin A., Enache D., RDF & RDF Query Languages - Building blocks for the semantic web, JISOM, CNCSIS B+, Vol. 9, Nr. 1, 2015, ISSN 1843-4711.
[12] Cook Shane, CUDA Programming, 1st Edition, A Developer's Guide to Parallel
Computing with GPUs, Morgan Kaufmann, 2012, ISBN 9780124159334.
[13] Sanders J., Kandrot E., CUDA by Example: An Introduction to General-Purpose
GPU Programming, Addison-Wesley Professional, 2010, ISBN-10 0-13-138768-5.
[14] Schneider S., Yeom, J, Nikolopoulos D., Programming multiprocessors with
explicitly managed memory hierarchies, in IEEE Computer Society, Volume 45 ,
Issue 5, October 2009, ISSN 0018-9162.
[15] Whitepaper, NVIDIA Tesla P100 The Most Advanced Datacenter Accelerator Ever
Built Featuring Pascal GP100, the World’s Fastest GPU


TEACHING SOFTWARE PROJECT MANAGEMENT: THE COLLABORATIVE VERSUS INDEPENDENT APPROACH

Adelina-Gabriela Chelcea 1*
Alex-Catalin Filiuta 2
Costin-Anton Boiangiu 3

ABSTRACT

The purpose of this scientific article is to outline the human’s desire and capability of
working independently or as part of a team. Each way of working has its advantages and
disadvantages. The question is: which one makes us happier? Depending on everyone’s
character, the professional objective can be obtained through working independently or
collaborating with other people.

KEYWORDS: Independent Work, Collaborative Work, Team Worker, Software Project Management, Teaching Strategies

INTRODUCTION

There are moments in our life when we have to integrate ourselves in a team or when we
need to work independently in order to achieve our goals and get satisfied with the efforts
we made.
Being an independent person means becoming self-aware, self-monitoring and self-
correcting. You have to know what you need to do and how to take the initiative rather
than waiting to be assigned any tasks. Doing what is needed to the best of your ability,
without the need for external prodding, and working until the job is completely done can
be a difficult activity. It is necessary to learn to work at a pace that you can sustain, to
accept your mistakes without looking for excuses and to refuse to let self-doubt or
negative emotions due to negative past experiences change your path.
“Coming together is a beginning. Keeping together is progress. Working together is
success.” — Henry Ford
As Henry Ford suggests, forming a team can be a difficult process because of the members' personalities and their ability to work with other people. Working in a collaborative environment requires a collective commitment to a common mission and a shared effort to obtain the desired results.

1* corresponding author, Student, Politehnica University of Bucharest, 060042 Bucharest, Romania, adelina.chelcea@gmail.com
2 Student, Politehnica University of Bucharest, 060042 Bucharest, Romania, alex.filiuta@gmail.com
3 Professor PhD Eng, Politehnica University of Bucharest, 060042 Bucharest, Romania, costin.boiangiu@cs.pub.ro
One of the most important moments when we have to join a team is after university graduation, when everybody needs to get a job. Ensuring that students are ready to start working in a professional environment is the main objective of any university. However, most of the projects that a student is required to work on demand individual effort, whereas more and more companies assemble groups of employees with great synergy, put a strong focus on terms like "teamwork" and "team building", and usually look for people who are comfortable working in a team.
So what then should universities be teaching us? Apart from the obvious technical skills
that any graduate is expected to have, students should be given the chance to exercise and
improve their social skills and learn how to be an effective group member.
T. Panitz states: [1] “Collaboration is a philosophy of interaction and personal lifestyle
where individuals are responsible for their actions, including learning and respect the
abilities and contributions of their peers;”
An idea that can be drawn from the above statement is that one's responsibility to the team also increases one's motivation, which can play a key role in a better understanding of the concepts required for that specific project and of the subject itself. Getting accustomed to the various ways of thinking of each individual in the team also improves one's thought process and social skills.
Johnson and Johnson (1986) [2] stated that students working in teams tend to achieve higher levels of thought and retain information longer than those who work individually. Shared learning gives students an opportunity to engage in discussion, take responsibility for their own learning, and thus become critical thinkers (Totten, Sills, Digby and Russ, 1991) [3].

PREVIOUS WORK

A research study was conducted by EJ Bryson that would answer the following question:
“Will allowing students to work in groups improve their understanding, or will working
individually lead to greater understanding?” [4].
At the start of each lesson, the class of 28 7th grade math students would be divided as
follows: “Half of the class was instructed that they would complete their work for this unit
by working in groups; the other half of the class would complete their work by
themselves. The students were randomly assigned to work either individually or in groups
using Random Sequence Generator.”
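For illustration only, a random split of a class into an individual-work half and a group-work half could be sketched as below; this is a generic example, not the Random Sequence Generator tool used in the cited study.

# Minimal sketch of randomly assigning a class: half work individually, half in groups.
# Illustrative only; the cited study used a Random Sequence Generator for the assignment.
import random

def split_class(students, group_size=4):
    shuffled = random.sample(students, len(students))   # random order, no repeats
    half = len(shuffled) // 2
    individual = shuffled[:half]                         # these students work alone
    grouped = shuffled[half:]
    groups = [grouped[i:i + group_size] for i in range(0, len(grouped), group_size)]
    return individual, groups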
Students also received a pre-test before the lessons started, and a post-test at the end of the
unit aiming to determine whether they had progressed or not. The pre-test contained two
questions, and the post-test three, two of which were the same as in the pre-test.
The study delivered no conclusive information on whether it would be better for students
to work individually or in groups mainly because the average grades on the pre-test were
higher than anticipated and no improvement could be observed.
An observation regarding the methodology can be made: randomly choosing teams and disbanding them after such short periods of time might not be the best idea, as each team has its own dynamic based on the members it is made of. Team members may require a
short period of time to figure out how they can work best together and to adjust to each
other. Rapidly changing the composition of the teams might not lead to any kind of
progress.
Another study, by A. Gokhale [5], aimed to evaluate the effectiveness of individual learning versus collaborative learning in developing critical thinking skills. One of the main research questions examined in this study was: "Will there be a significant difference in achievement on a test comprised of "critical thinking" items between students learning individually and students learning collaboratively?"
A total of 48 students participated in this study, which consisted of two parts: lecture and
worksheet. For the worksheet part, the students were divided into two sections: one in
which everybody worked individually and one in which students worked in groups of
four. After both parts had finished, students were tested over the studied material. The
posttest consisted of 15 “critical-thinking” items.
After evaluating the posttests, it was clear that students who worked collaboratively
performed better than those who worked individually, with a significant difference. The
mean of the posttest grades for the students in the collaborative section was 12.21, higher
than the mean of their counterparts (8.63).
Vygotsky’s beliefs [6] were that students can achieve a higher understanding through
collaborative work, by learning from one another. Different sets of skills and ways of
thinking improve the learning process of everyone involved.

PROPOSED APPROACH

After the student years, during which everyone adopts their own way of learning, we get a job and are confronted with the choice of working independently or being part of a team. Many people want to work independently, often with the goal of running their own business. However, a first job is usually carried out within a team, the collaborative approach being the one preferred by most companies.
Considering that students are used to working independently while companies look for employees with teamwork skills, we wanted to gauge students' openness towards teamwork and employees' opinion about what this environment means and what level of satisfaction it brings.
Therefore, we created a survey that helps us see the differences between what a student expects and what he or she gets as an employee. Both students and employees in the field of IT, especially programmers, responded to this questionnaire.
The 15 questions mainly focused on the background of the participants, what kind of
projects they worked on so far as students or employees, and their preferences regarding
the amount of social interaction they wish to have in a work environment.

STUDY RESULTS
This chapter presents the results of our study regarding people's preference for working independently or in a team. Approximately 100 people responded to our survey, about a third of them employees and the rest students.
Figure 1.1. Students’ answers to the question: “Do you prefer to work independently or in a team?”

Figure 1.2. Employees’ answer to the question: “Do you prefer to work independently or in a

team?”

Figure 2.1. Students’ answers to the question: “Are you a self-motivated person?”
Figure 2.2. Employees’ answers to the question: “Are you a self-motivated person?”

Figure 3.1. Students’ answers to the question: “How do you feel about listening to and respecting
other people’s ideas?”

Figure 3.2. Employees’ answers to the question: “How do you feel about listening to and
respecting other people’s ideas?”
Figure 4.1. Students’ answers to the question: “Would you be willing to take the risk of being stuck
on an issue rather than working with other people that might be able to solve it?”

Figure 4.2. Employees’ answers to the question: “Would you be willing to take the risk of being
stuck on an issue rather than working with other people that might be able to solve it?”

Figure 5.1. Students’ answers to the question: “Do you think that you can get a better hold on a
problem through sharing knowledge and abilities that you can individually?”
Figure 6.1. Students’ answers to the question: “Would you feel stressed about not having someone
to cover for you when you take time off?”

Figure 6.2. Employees’ answers to the question: “Would you feel stressed about not having
someone to cover for you when you take time off?”

The outcomes are presented as a comparison between students' and employees' opinions.
As we can see in Figure 1.1 and Figure 1.2, the tendency to work independently is stronger among students than among employees: 21.3% of students, compared to 10% of employees, want to work independently, and only 47.5% of students, compared to 63.3% of employees, want to be part of a team. These results reflect the way students were accustomed to working before getting a job.
According to the percentages in Figure 2.1 and Figure 2.2, we can conclude that students seem to be more self-motivated. This is the result of working on their own small projects. As an employee, your work is just a part of an entire project, which can reduce your motivation and even your satisfaction if you are expecting visible results.
Figure 3.1 and Figure 3.2 point out how a person feels about listening to and respecting others' ideas. As a student, you are used to implementing your own ideas, but in a team another person's ideas can be better and more appropriate for the
current project. Listening to and respecting others' ideas is learned while working in a team, as the diagrams above show.
Figure 4.1 and Figure 4.2 reveal that students are more willing to take chances. Because as a student you work only on small projects, you are inclined to say that every problem has a solution, since you can anticipate most of the problems that may appear. In a bigger project you cannot fully predict its complexity, so employees would rather work with people who can come up with an idea when they are stuck.
Even though many students are not used to listening to and respecting others' ideas, they admit that a problem can be solved more easily if many people share their knowledge (Figure 5.1).
As we can see in Figure 6.1 and Figure 6.2, both employees and students consider that independent work can produce more stress than working in a team.

CONCLUSIONS

According to our research, the biggest problem that students face is the inconsistency between the college approach, which is focused on independent work and learning, and the market demand, which requires collaborative working.
Both independent and collaborative working have their advantages and disadvantages. Independent work offers autonomy, develops an entrepreneurial spirit, promotes creativity and sometimes gives you the opportunity to gain deeper knowledge of a specific subject. The downsides are the lack of support, as this approach puts you in the position of being on your own, the possibility of becoming overwhelmed with the amount of work, and even isolation.
The collaborative approach gives you the possibility of interacting with other people, sharing ideas and splitting tasks, thus reducing the effort required. On the other hand, you may not get to handle your favorite parts of the process and your initiative may not be acknowledged.
Depending on each person's personality, one or the other can be more appropriate. Until you have dealt with both working approaches, you cannot say which one gives you more satisfaction and makes you happier.

FUTURE WORK

Our study methodology could be subjected to several improvements in future iterations.


First of all, it might be beneficial if the survey were conducted over a longer period of time, aiming to gather as many submissions as possible. The form should also include questions that allow the participants to explain why they answered in a certain way (e.g. Do you prefer to work independently or to be part of a team? Why? How do you feel about listening to and respecting others' ideas? Why?).
One interesting path this study could follow is finding out the impact that the cultural
environment has on individuals in regard to the process of learning and what teaching
approach might best suit them, by reaching out to students and employees in different
countries.
Another future direction will be to enhance the software project management teaching
approach using voting-based methods like the ones described in [15][16][17][18],
blending the collaborative and independent approaches to achieve the best possible
impact.

REFERENCES

[1] Panitz, Theodore. (1999). Collaborative versus Cooperative Learning: A Comparison of the Two Concepts Which Will Help Us Understand the Underlying Nature of Interactive Learning
[2] Johnson, R. T., and Johnson, D. W. (1986): Action research: Cooperative learning
in the science classroom. Science and Children
[3] Totten, S., Sills, T., Digby, A., and Russ, P. (1991): Cooperative Learning: A Guide
to Research. New York, Garland
[4] Eilisha Joy Bryson. (2007). Effectiveness of Working Individually Versus
Cooperative Groups: A Classroom-Based Research Project
[5] Anuradha A. Gokhale. (1995). Collaborative Learning Enhances Critical Thinking.
Available: https://scholar.lib.vt.edu/ejournals/JTE/v7n1/gokhale.jte-v7n1.html
[Accessed February 2017]
[6] L.S. Vygotsky. (1978). Mind in Society: Development of Higher Psychological
Processes
[7] Costin-Anton Boiangiu, Alexandru Constantin, Diana Deliu, Alina Mirion, Adrian
Firculescu, "Balancing Competition and Collaboration in a Mixed Learning
Method", International Journal of Education and Information Technologies, ISSN:
2074-1316, Volume 10, 2016, pp. 51-57
[8] Costin-Anton Boiangiu, Adrian Firculescu, Nicolae Cretu, "Combining
Independence and Cooperation as One Anarchic-Style Learning Method",
International Journal of Systems Applications, Engineering & Development, ISSN:
2074-1308, Volume 10, 2016, pp. 97-105
[9] Costin Anton Boiangiu, Adrian Firculescu, Ion Bucur, “Teaching Software Project
Management: The Independent Approach”, The Proceedings of Journal ISOM, Vol.
10 No. 1 / May 2016 (Journal of Information Systems, Operations Management),
pp 11-28, ISSN 1843-4711
[10] Costin Anton Boiangiu, Adrian Firculescu, “Teaching Software Project
Management: The Competitive Approach”, The Proceedings of Journal ISOM, Vol.
10 No. 1 / May 2016 (Journal of Information Systems, Operations Management),
pp 45-50, ISSN 1843-4711
[11] Costin Anton Boiangiu, Ion Bucur, “Teaching Software Project Management: The
Collaborative Approach”, The Proceedings of Journal ISOM, Vol. 10 No. 1 / May
2016 (Journal of Information Systems, Operations Management), pp 134-140,
ISSN 1843-4711
[12] Adrian Firculescu, Ion Bucur, “Teaching Software Project Management: The
Anarchic Approach”, The Proceedings of Journal ISOM, Vol. 10 No. 1 / May 2016
(Journal of Information Systems, Operations Management), pp 92-98, ISSN 1843-
4711
[13] Adrian Firculescu, “Teaching Software Project Management: The Mixed
Collaborative-Competitive Approach”, The Proceedings of Journal ISOM, Vol. 10
No. 1 / May 2016 (Journal of Information Systems, Operations Management), pp
168-174, ISSN 1843-4711.
[14] Gabriela Bajenaru, Ileana Vucicovici, Horea Caramizaru, Gabriel Ionescu, Costin-
Anton Boiangiu, "Educational Robots", The Proceedings of Journal ISOM Vol. 9
No. 2 / December 2015 (Journal of Information Systems, Operations Management),
pp. 430-448, ISSN 1843-4711
[15] Costin-Anton Boiangiu, Radu Ioanitescu, Razvan-Costin Dragomir, “Voting-Based
OCR System”, The Proceedings of Journal ISOM, Vol. 10 No. 2 / December 2016
(Journal of Information Systems, Operations Management), pp 470-486, ISSN
1843-4711
[16] Costin-Anton Boiangiu, Mihai Simion, Vlad Lionte, Zaharescu Mihai – “Voting
Based Image Binarization” -, The Proceedings of Journal ISOM Vol. 8 No. 2 /
December 2014 (Journal of Information Systems, Operations Management), pp.
343-351, ISSN 1843-4711
[17] Costin-Anton Boiangiu, Paul Boglis, Georgiana Simion, Radu Ioanitescu, "Voting-
Based Layout Analysis", The Proceedings of Journal ISOM Vol. 8 No. 1 / June
2014 (Journal of Information Systems, Operations Management), pp. 39-47, ISSN
1843-4711
[18] Costin-Anton Boiangiu, Radu Ioanitescu, “Voting-Based Image Segmentation”, The
Proceedings of Journal ISOM Vol. 7 No. 2 / December 2013 (Journal of
Information Systems, Operations Management), pp. 211-220, ISSN 1843-4711.
PERFORMANCE MEASUREMENT OF AN ENTITY FROM THE PERSPECTIVE OF FINANCIAL STATEMENTS

Marilena-Roxana Zuca 1*

ABSTRACT

Financial statements constitute the information support for conducting financial analysis aimed at formulating a financial diagnosis. Different categories of users rely on the financial and other information provided by the accounting, statistical and operational systems of an entity to diagnose its activity and, on this basis, to substantiate their economic and financial decisions. Each economic entity is required to prepare annual financial statements, which must include: the balance sheet; the profit and loss account; the statement of changes in equity; the statement of cash flows; and the notes to the annual financial statements.

JEL CLASSIFICATION CODE: M41

KEYWORDS: Financial Statements, Performance, Financial Position, The Fair Value of Cash Flows, Users, Decision.

1. OBJECTIVE OF FINANCIAL STATEMENTS

Financial statements present the financial results of an economic entity according to
information categorized by economic characteristics known as "structures of financial
statements" that are directly related to the measurement of financial position (assets,
liabilities and equity) and performance (revenue and expenditure).
Financial statements provide information relating to: the company's financial position; its
performance; cash flows. This information becomes useful for various users in decision
making relating to: purchase or sale of shareholdings in the company concerned;
assignment or replacement of individuals in governing bodies.
Application of fair value in accounting appears to allow preparation of financial
statements that give better information to third parties on the performance of current and
future requirements, and therefore the opportunity to substantiate their decisions. Such an
assertion makes us ask the question of the relationship between accounting results in the
broad sense of the term and the exchange value of the entity. On the background of the
exponential growth of financial markets, some experts ask themselves the following
question: Is recognizing balance sheets and past results likely to generate information on

1
* corresponding author, Senior lecturer, Ph.D, Romanian-American University, Bd. Expoziţiei 1B, Sector 1,
Bucharest, Romania, zuca.marilena.roxana@profesor.rau.ro
the evolution of future results?1 The authors examine the link between the accounting result and the stock market and are interested in accounting-based valuations. The Edwards-Bell-Ohlson model provides an appropriate framework for reflection, to the extent that the value of a company depends on its book value and on its discounted abnormal earnings. However, a question remains: why are some companies traded at prices higher (or lower) than their book value?
Despite criticism, financial statements remain the richest source of information made available to investors. Developments in financial theory provide a framework for analyzing the importance of this contribution to markets. Academic research has been stimulated by two major contributions: the efficient markets hypothesis formulated by E. Fama and the equilibrium model of financial assets (Capital Asset Pricing Model: CAPM) developed by Sharpe (1964) and Lintner (1965). By providing a benchmark of efficient valuation, researchers were able to measure price reactions and highlight abnormal returns around various announcements. The Feltham-Ohlson model, discussed by Gerard Desmuliers, Michel Levasseur and Philippe Dessertine in 2001, can be considered a kind of fulfilment of the implications of valuation research for estimating performance at fair value. According to it, the share value depends on the value of equity, on a multiple of abnormal operating profits, on the adjustment implied by the application of the prudence principle and on the effect of other information. The model has been validated through a number of studies, which show a continued interest in traditional financial statements without prejudging the eventual superiority of financial statements measured, in whole or in part, at fair value.
Annual financial statements must give a true and fair view of the financial position,
performance, changes in equity and cash flows of the entity for the financial year. They
must satisfy the common needs of users. However, experts in accounting found that they
do not provide all the information that users need regarding decisions, because "to a great
extent" financial statements reveal past effects and do not typically provide nonfinancial
information. Also, based on financial statements, the results of the management
administration can be evaluated, including how to manage the resources entrusted to
managers. Those users who wish to evaluate the administration of the entity or manager's
responsibility proceed to establish economic decisions that focus on either option "to keep
or sell" in that entity or to replace or reconfirm its leadership.
At the same time, the financial statements represent the support information needed to
conduct financial analysis focused towards formulating a financial diagnosis. Different
categories of users apply financial information and other information provided by
accounting, statistical and operational entity, to diagnose its work, and on this basis is able
to substantiate the economic and financial decisions.
The multitude of economic consequences of providing information through synthetic
documents (distribution of wealth among individuals, the aggregate level of risk and its
allocation among individuals, allocation of resources between companies, the resources
used for the production, certification, disclosure, analysis and interpretation of
1
Nihat Aktas, Eric de Bodt, Michel Levasseur et al., The Emerging Role of the European Commission in
Merger and Acquisition Monitoring: The Boeing-McDonnell Douglas Case, (2001), pagines 447-480,
European Financial Management 7 (4).
information etc.) lead to the need to choose a pattern for the synthetic documents.1 There is no agreement on what constitutes an optimal set of synthetic documents; it is accepted only that the disclosure policy through synthetic documents is the result of the combined action of accounting normalization and of factors such as: the size of the entity, the number of shareholders, stock market listing, and the performance of the costs associated with
the disclosure of information. We believe that the action of these factors, dependent on
the characteristics of the economic, political, legal and cultural environment, influence
managerial decision of compilation and publication of synthetic documents. As a result,
between financial statements prepared in different countries there may be significant
differences concerning not only the form and content, but also the objectives assigned to
them. To adequately fulfil their respective objectives, synthetic documents must be well structured, readable, concise and creative, because only in this way can they play an important role in the decisions of the various groups: based on them, the financial analyst will try to forecast profits, the investor will want to choose an investment, a banker will study a loan application, and so on. Therefore, the most direct relationship between an entity's accounting and its management, regardless of the legal form of the organization, is given by the quality of accounting (through the synthetic documents) as a provider of accurate, consistent, verifiable and relevant information for timely and effective decision-making; the accountants and users of tomorrow must better understand the relationship between accounting and other disciplines, the area of their responsibility, and the need for continuous improvement.

2. BALANCE SHEET IN FAIR VALUE AND FINANCIAL PERFORMANCE

Taken as a financial situation that renders equity through the difference between assets
and liabilities, it is considered that the balance sheet provides information on the nature
and amounts invested in the resources (assets) of the entity, opposite its obligations to
creditors and the owners of these resources.
The international accounting referential analyses the balance sheet in the context of the
conceptual framework and IAS 1 "Presentation of Financial Statements" in its revised
form. International conceptual framework defines and characterizes the elements that
describe the financial position of an economic entity, as: assets, liabilities and equity.

• For assets
Intangible assets according to international standard IAS 38 "Intangible Assets" are
identifiable non-monetary asset without physical substance.
Tangible assets according to international standard IAS 16 "Property, Plant and
Equipment" are tangible assets held by an entity, either for use in the production of goods
or services or for rental to others or for administrative purposes.
An item of intangible or tangible assets (as well as any active element in theory) must be
counted as an asset when: it is probable that the economic entity will benefit from future
economic benefits associated with it; and its cost can be determined reliably. In principle,
the recovery value of tangible and intangible assets is made by the depreciation system
1
Malciu L. „ Supply and demand of accounting information”, Publishing House Economică, Bucharest,
1998, pag.39
according to international standards IAS 38 and IAS 16. Further impairment beyond the
depreciation is made according to IAS 36 "Impairment of Assets". A distinct category of non-current assets consists of investment property: real estate (land and buildings) held to earn rentals or for capital appreciation. These assets are subject to IAS 40 "Investment Property".
Financial assets are securities and financial receivables (receivables, particularly in the
form of loans). In view of the international accounting referential, financial assets and
short-term financial investments are included in two categories of investments: long-term
investments and current investments. Accounting and evaluation is made in accordance
with IAS 39 "Financial Instruments: Recognition and Measurement”. Basically,
investments are measured at fair value. Impairments recorded during their existence are
recorded in the category of adjustments for impairment.
Current assets are defined in the context of balance sheet presentation according to
international standard IAS 1, using the equivalent term of current assets.
Stocks - according to IAS 2 "Inventories" - are assets that are completed or in the course of production and intended to be sold in the normal course of the entity's activity, or materials and supplies intended to be consumed in the production process or in the provision of services. This definition does not take into account the nature of the item considered, but
its destination, which is strongly influenced by the activity of the entity holding the assets.
For example, land and buildings are considered non-current assets in most entities, but they are stocks for a real estate trader. Evaluated during the financial year at the level of their input cost, stocks can be adjusted at the inventory date when their probable selling value (net realizable value) is lower than the recorded cost.
Receivables are the rights of the entity over third parties. The main category is trade receivables, to which are added receivables from employees, social and fiscal receivables, advances to associates on settlements related to capital, receivables from sundry debtors, etc. When the inventory value at the end of the financial year is lower than the collectible nominal value recorded in the accounts, impairment adjustments are recognized for the difference.
The available funds (cash on hand and bank accounts) are the assets with the highest degree of liquidity. They appear as values receivable, current accounts with banks (with debit balances), interest receivable, cash and other valuables on hand, and letters of credit. The international accounting referential calls these elements liquidity; they are attached to cash equivalents.
Non-current assets held for disposal are non-current assets that the entity intends to dispose of by sale or exchange in the 12 months following the end of the financial year. They must be measured at the lower of carrying amount and fair value less costs to sell.

• For debts
International standard IAS 1 defines and presents two major categories of debt, taking into account their degree of enforceability: non-current and current liabilities. The former are long-term liabilities (maturity of more than one year), while the latter are short
term liabilities (maturity of less than one year). The current liabilities category includes the short-term part (payable over the next 12 months) of long-term liabilities. In the category of non-current liabilities there are particular liability cases resulting from the new financial mechanisms required by applying international standards. Such are, for example, deferred taxes entailed by the application of IAS 12 "Income Taxes" and pension liabilities, following the application of IAS 19 "Employee Benefits". Liabilities relating to provisions are the result of applying IAS 37 "Provisions, Contingent Liabilities and Contingent Assets". This standard defines provisions as liabilities whose maturity or amount is uncertain.

• Equity, as the owners' residual interest in its assets.


Fair value or historical cost?
If we consider that the balance sheet should render the probable liquidation value of an entity, fair values would certainly be preferred over historical costs. The reasons behind this choice can be summarized as follows: fair value represents economic reality; a balance sheet at fair value is a true representation of the financial position; it renders the probable liquidation value, while historical cost "leads" to a "fictitious" accounting.
Arguments in favor of fair value would be much stronger if all elements of the valuation were based on fair values. Currently, however, international accounting standards combine two measurement bases: historical cost and fair value. Accounting based on fair value moves further away from the historical perspective and closer to the current-value perspective. But fair values increase the risk of misunderstanding by some investors or potential members of the entity.
Even if fair value can be the market value, the net book value does not necessarily equal the market value, due to the existence of the entity's internally generated goodwill. In these circumstances, by using fair value the balance sheet value comes closer to the market value, but without equalling it. Equality would arise only when assets that were not acquired (those internally generated) would be recognized in the balance sheet.

3. PROFIT AND LOSS ACCOUNT - THE PANEL REFLECTING FINANCIAL PERFORMANCE

The profit and loss account is the financial statement that measures the success of the activity of an economic entity over a given period. Given that accounting results are the consequence of applying a set of premises and accounting principles, above all the independence of financial exercises and the matching of expenses to revenues, the importance given to this financial statement is accompanied by a dose of caution. The profit and loss account provides investors and creditors with the information required to forecast the amounts, the timing and the entity's ability to generate cash flows. In this way, investors can more accurately assess the economic value of the entity, and creditors can determine the extent to which the entity will be able to repay its debt. So a question arises: how does the profit and loss account help users forecast cash flows? First of all, this financial statement
provides information enabling the evaluation of the past performance of an economic entity; even if past positive performance is not a guarantee of future success, it allows at least an
update of the most important trends, because when there is a rational correlation between
past performance and future results, the estimate of future cash flows should not be
questioned. On the other hand, the profit and loss account gives users the information required to measure the risk or uncertainty of future cash flows. By providing information that explains the elements that lead to earnings - income, expenses, gains and losses - this financial statement highlights the relationships between the components described. It can also be used by other categories of users. Customers will be informed of the
extent to which the economic entity may provide them with the goods and services
needed. Unions will examine the results in order to negotiate new collective agreements.
In turn, public power uses results information to substantiate its economic and fiscal
policies.
The profit and loss account includes: net turnover, the income and expenses of the year classified by their nature, and the financial result (profit or loss). Unlike the balance sheet, the result account (profit and loss) translates the entity's business in terms of flows: it records in credit the goods, services or money coming in as income (inflows) and in debit the goods, services or money going out as expenses (outflows).
Information about the entity's performance (profit or loss), and especially about its profitability, is used by users to: assess potential changes in the economic resources that the entity may control in the future (information about the variability of performance being particularly important); predict the entity's ability to generate cash flows from its existing resources; and make judgments about the effectiveness with which the entity may use new resources.
The profit and loss account presents a double interest: it determines an overall result and allows an overall assessment of the economic and financial performance of the period; and it recaps all the income and expense items, contributing to the identification of the favorable and unfavorable factors that influenced the result. Thus, a decrease of the result of the exercise may be generated by an uncontrollable growth of the purchase cost of raw materials, by increased indebtedness and financial expenses, or by decelerating sales.
In terms of form, the income statement may be presented differently depending on: the
criterion adopted for classifying income and expenses; how to report income to expenses.
In structuring revenue and expenditure, the continental practice adopted a classification by
the nature of it, while the Anglo-Saxon accounting favors the classification by purpose or
function of the entity.
Regarding the presentation in the form of "account", revenues and expenditures occur in
two separate columns: the balance is entered as appropriate, in the column of income
(loss) or expense (profit).
The "list" presentation allows highlighting revenues, expenses and earnings per operation.
Whatever the form of presentation, the information in the profit and loss account is divided into types of activities that correspond to the economic and financial transactions carried out by the entity during a period of administration.
The 4th Directive suggests four schemes for the profit and loss account that may be retained by Member States (in list form and in tabular form, with the presentation of expenses by nature or by destination), there being so far no universal scheme of presentation. These
schemes manifest as options due to the confluence of the differences in the European
accounting and cultural tradition carried by EU countries. For example, Franco-German
accounting tradition expresses its preference for presentation of expenditure and revenue
by their economic nature, while Anglo-Saxon tradition expresses its preference for
presentation of expenditure and revenue after the entity functions.
In the first stage of the Romanian accounting reform, standard setters opted for the
French model of profit and loss account, taking advantage of the relevant financial
information by the state for macroeconomic purposes (particularly regarding the added
value).
In the second stage of the reform of the Romanian accounting a profit and loss account
in French “style" list format (vertical) was chosen with a classification of expenses by
nature, according to the 4th Directive and IFRS. However, the option of Romanian setters
was surprising. The possibility of a profit and loss account with a breakdown of expenses
by the entity functions was not foreseen, although there were ongoing concerns for the
development of the capital market in Romania for attracting foreign investment and
aligning with the international accounting referential.
The profit and loss account in which revenue and expenditure are classified by nature does not fully satisfy the mass of users who seek to use accounting information for analysis, forecasting and decision making, as they cannot find in this document anything other than the existence or not of a profit and its sources categorized by nature. Therefore it is necessary to create other models for the profit and loss account. Why is there no possibility for other types of profit and loss account models, as long as we are interested in attracting new investors and developing the Romanian capital market, while also taking into account the requirements of small and medium enterprises?
Probably the State again puts its "footprint" on the model, desiring information converging towards the calculation of macroeconomic indicators. Structuring the profit and loss account by function would give users of accounting information more useful and relevant data than the classification by the economic nature of expenditure and revenue.
It can be said that large entities - especially those publicly traded - would opt for structuring by functions, a presentation appreciated by investors and managers that also meets the criteria for international comparison, precisely as a result of the globalization process and the globalization of financial markets.
In interpreting the performance of an economic entity we often put a sign of equivalence
between performance on the one hand and the result, on the other hand.
Elements of a qualitative nature (e.g. the level of staff training) that cannot be measured and accounted for, but have to be taken into consideration when assessing the performance of an economic entity, are numerous. Naturally the following questions arise: to what extent can we appreciate as favorable or not a result obtained by the economic entity, taking into account its presentation in absolute value through the balance sheet? What possibilities does the balance sheet provide for correlating the result with other economic and
financial indicators? For a long time, users of financial statements have been limited to
the profit and loss account to find information on accounting result, considered the most
important indicator for measuring the performance of an entity.
Therefore, we will focus further on the result, as an indicator pursued by all users of financial information. The accounting result was for a long time considered the most important indicator for measuring performance, reflected through the profit and loss account. The result always takes the form of profit or loss. Thus, several approaches to the concept of profit have been outlined in the literature:
• the first approach to this concept derives from the etymology of the term "proficere", which means to progress, to give results;
• in another approach, profit is "the generic name given to the positive difference between the income from selling goods made by an economic agent and their cost, regarded as an expression of economic efficiency".
Prestigious economists consider profit to be a "residual income", "the final element or residual difference between total revenue and costs, which is what remains of the difference after subtracting various amounts". Interpretations are not excluded according to which "many people consider profit as constituting a useless and, from an economic point of view, unjustified surplus appropriated by inputs". Such views are fueled by the inequalities that form over time between different social groups based on income derived from profit, because large and very large profits coexist with small profits or even with losses.
We believe that a detailed analysis of the performance of an economic entity must take into account all the factors of production that have contributed to forming the result, in its economic nature.

4. STATEMENT OF CASH FLOWS - OWN FUNDING CAPACITY

The cash flow statement required by IFRS shows the sources of cash inflows received
by an economic entity during the accounting period, and the purposes for which they were
used. The situation is part of the analysis of an activity as it allows the analyst to
determine the following: the company's ability to generate cash from its activities;
consequences quantified in cash investment and financing decisions; effects of
management decisions on financial policy; the constant ability of firms to generate cash;
how well the operating cash flow is correlated with net revenues; accounting policies
impact on the quality of earnings; information on long-term liquidity and solvency of a
company; whether the presumption of business continuity is or not reasonable; the ability
of firms to finance growth from internally generated funds.
Since the cash inflows and outflows constitute objective information, the data presented in
the cash flow statement are an economic reality. The situation reconciles the increase or
decrease in the cash and cash equivalents of an entity that appear during the accounting
period (objectively verifiable information).
On the other hand, this situation should be read taking into account the following aspects:
There are analysts who believe that accounting rules are created primarily to promote
comparability, rather than to reflect economic reality. Even if this view is considered a harsh one, it is true that too much flexibility could cause problems for analysts who are primarily interested in assessing the future ability of a company to generate cash from its activities. As with the data in the profit and loss account, cash flows can be unpredictable from one period to another, reflecting random, cyclical and seasonal transactions involving cash, as well as sectorial trends. It can be difficult to decipher important long-term trends from short-term fluctuations and less meaningful data.
Financial analysts can use the cash flow statement required by IFRS to help them determine other values they want to use in their analysis, for example free cash flows, which are often used to determine a company's value. Defining free cash flow is not an easy task, as there are many different values commonly referred to as free cash flows.
Free cash flow with a discretionary character is the cash available for discretionary purposes. Under this definition, free cash flow is the cash derived from operating activities minus the capital expenditure required to maintain the current level of activities. Therefore, the analyst must identify the part of the capital expenditure included in cash flows from investments that is associated with maintaining the current level of activities - a remarkable task. Any surplus cash flow can be used for discretionary purposes (e.g., to pay dividends, reduce debt, improve solvency, or expand and improve the business). Therefore, IFRS requires separate presentation of the expenditures that were necessary to maintain the current level of activities and of those that were made to expand or improve the business.
Free cash flows available to shareholders evaluate a firm's ability to pay dividends to its
shareholders. In this case, the total cash used in investing activities (capital expenditures,
acquisitions and long-term investments) is deducted from cash derived from operating
activities. Actually, this definition states that the company must be able to distribute as
dividends the operating cash, left after the company carried out the investments
considered by management to be necessary to maintain and increase the current activities.
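To make the two definitions concrete, a small illustrative computation is sketched below; the figures and variable names are hypothetical and serve only to show how the two free cash flow measures are derived from the cash flow statement.

# Illustrative only: hypothetical figures taken from a cash flow statement.
operating_cash_flow = 500.0        # cash derived from operating activities
maintenance_capex = 180.0          # capital expenditure needed to maintain current activity
total_investing_outflows = 320.0   # capex + acquisitions + long-term investments

# Free cash flow with a discretionary character: operating cash minus the capital
# expenditure required to maintain the current level of activities.
fcf_discretionary = operating_cash_flow - maintenance_capex           # 320.0

# Free cash flow available to shareholders: operating cash minus the total cash
# used in investing activities.
fcf_to_shareholders = operating_cash_flow - total_investing_outflows  # 180.0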
Generally, for a well-managed and financially sound entity, cash derived from operating activities is higher than net income; if not, the analyst should be skeptical about the company's solvency. Fast-growing companies often have negative free cash flows, because their rapid growth requires capital expenditures and other high-value investments.
Mature companies often have positive free cash flows, while declining firms often have extremely positive free cash flows, because the lack of growth means a low level of capital expenditure. Therefore, large and growing free cash flows are not necessarily a positive or a negative sign; they depend greatly on the life-cycle stage of the industry in which a company operates. That is why free cash flows should be assessed together with the firm's revenue projections. Many valuation models use operating cash flow, thus giving management an incentive to record inflows as operating (normal and recurring) and outflows as relating either to investments or to financing.
Fair value, as defined in International Financial Reporting Standards compared to
historical cost approach, allows an assessment of the financial position as close to reality
as possible. The accounting regulations on the annual individual and consolidated financial statements, approved by Minister of Public Finance Order no. 1.802/2014, support the fair value model only in certain situations, namely:
• optional revaluations of tangible assets, with the results of the revaluation operation reflected in the individual financial statements;
• for financial instruments, including derivatives, fair value measurement is allowed only in the consolidated financial statements;
• for goods obtained free of charge or found as a surplus on inventory.
The accounting regulations do not require the use of such a model; rather, it is presented as an alternative valuation rule for a relatively narrow range of items. Thus, to comply with the European Directives and with International Financial Reporting Standards, Romanian accounting accepted historical cost evaluation as the basic system and, as an alternative valuation rule, fair value for the items listed above, under the conditions presented.
In Romanian practice, for many, the concept of fair value covers only one reality: market value. But this is only one of the ways of estimating fair value, the one that provides the greatest objectivity, because it is based on information from outside the entity which cannot be influenced in any way. Under the current economic conditions, markets lose liquidity or cease to exist, making fair value measurements based on market information irrelevant and unreliable.
For our country, fair value, like all the changes in the accounting system since 2001, is new. It is difficult to clarify at the conceptual level, and even more difficult to apply in practice. Fair value was mentioned for the first time in the accounting regulations harmonized with international standards in 2001, then in 2002, when it was decided to connect Romanian accounting to international accounting standards and to the European ones (which had not been previously updated in the context of international convergence).
Currently, by waiving the rules mentioned and adopting accounting regulations on the
annual individual and consolidated financial statements, we remain in the spirit of
international accounting standards and therefore we accept fair value as the alternative
value.
The concept of fair value is reflected in several international accounting standards (IAS 16, IAS 19, IAS 36, IAS 38, IAS 39, IAS 40, IAS 41), and the IASB, as accounting standard-setting body, does not waive this concept even when developing International Financial Reporting Standards, using it in the newly issued IFRS (IFRS 2 and IFRS 3). All these standards require the use of fair values for one or more classes of assets or liabilities. Currently, the concept of fair value found in the accounting standards developed by the IASB is: "fair value is the amount for which an asset could be exchanged, or a liability settled, voluntarily between informed parties, in a transaction in which the price is determined objectively".
The best impact of fair value over a business performance measurement is offered by IAS
39 "Financial Instruments: Recognition and Measurement”. Certain assets and liabilities
are included in the balance sheet at fair value, at market price. Any changes in the fair
value of recognized assets and liabilities are taken into the results account. This changes
the nature of profit and loss account and requires rethinking the measurement of
performance, because net income before tax is no longer the economic activity of the
entity, but rather includes the profit generated by the increase in fair value of assets. In
other words, changes in market values will lead to changes in the results reported in the
income statement and in these circumstances it will be difficult to attribute performance to
internal or external changing conditions (market).
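A simple numerical illustration of this mechanism, using hypothetical figures, is sketched below; it is not drawn from the cited standards and only shows how an unrealized fair value change flows into reported income.

# Hypothetical illustration: a financial asset carried at fair value under IAS 39.
carrying_amount = 100.0    # fair value at the start of the period
market_value = 120.0       # fair value at the reporting date

# The change in fair value is recognized in the results account even though the
# gain is unrealized, which is why reported income becomes more volatile when
# market prices move.
fair_value_gain = market_value - carrying_amount   # 20.0 taken to profit or loss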
If the use of fair values leads to a market-price impact on income, and thus on performance, then income will become more volatile as a result. The use of fair value implies a shift away from the profit and loss account towards a statement of comprehensive income, as developed in the United States. There are two views outlined in this regard, which we present in the following table:
Table 1.
CONS:
■ it is difficult for shareholders to assess the effectiveness of resource management by the entity's management;
■ profits arising from changes in fair value are unrealized, and the recognition of unrealized gains is inconsistent with the traditional prudent accounting approach.
PROS:
■ acquisition of assets whose value decreases is an indication of poor management that would be recognized in the income statement.
Source: own design
Reporting comprehensive income in the financial statements involves the presentation of all changes in assets and liabilities arising from transactions other than those with shareholders. This approach presents in one single statement, called the "statement of comprehensive income", all the items currently recognized in the income statement or in the statement of changes in equity: revaluation of property, plant and equipment, investment property and goodwill, gains/losses from disposals of assets and net investments, fair value hedges or cash flow hedges, etc. Currently, fair value is trying to reduce the negative effects of historical cost. With the ever stronger development of financial markets, fair value will play an increasing role.

5. CONCLUSIONS

Economic entities must apply a modern asset valuation or a mix of elements of historical cost and fair value. This is the basis for more efficient decision making and for improving the economic and financial performance of economic entities.
Although slow, the transition to fair value seems to be an unstoppable trend, because more and more specialists consider it the best method for valuing assets. This situation is fueled by the pressure of the accounting normalization bodies, which promote it via the standards they develop.
Discussions on the controversial subject of the use of fair value are far from concluded and will continue for a long time; and because the concept of fair value is closely related to that of fair presentation, both concepts are still evolving, influencing each other.
Without claiming an exhaustive treatment, through the information and conclusions presented as a result of this study we open new opportunities for new approaches to the stated problem:
"Is the accounting information provided by financial statements sufficient and relevant for analyzing the transition from historical cost to fair value? Are the calculation tools used appropriately, and is the analysis model applied correctly, so that the results obtained reflect reality in terms of the valuation of the assets of an economic entity, based on information from financial statements? Does the transition from historical cost to fair value have major accounting implications? Or, otherwise formulated: can the transition from historical cost to fair value lead to better performance of Romanian economic entities?"

BIBLIOGRAPHY

[1] CECCAR, International Financial Reporting Standards, Publishing House CECCAR, 2015.
[2] Deaconu A., Fair value: accounting concept, Publishing House Economică, Bucharest, 2009.
[3] Malciu L., Supply and demand of accounting information, Publishing House Economică, Bucharest, 1998.
[4] Nihat Aktas, Eric de Bodt, Michel Levasseur et al., The Emerging Role of the European Commission in Merger and Acquisition Monitoring: The Boeing-McDonnell Douglas Case, European Financial Management 7 (4), 2001, pp. 447-480.
[5] Toma C., Conturile anuale şi imaginea fidelă în contabilitatea românească, Publishing House Junimea, Iaşi, 2001.
[6] Minister of Finance Order no. 1802/2014 approving the accounting regulations on the annual individual and consolidated financial statements, Minister Order no. 963/30 Dec 2014.
[7] http://www.fasb.org
[8] http://www.iasb.org
DESIGNING A FLEXIBLE DOCUMENT IMAGE ANALYSIS SYSTEM: THE MODULES

Andrei Tigora 1*

ABSTRACT

The project described here is a flexible and easily extensible OCR system, designed to be configurable at runtime. The system is composed of standalone binary files that interact with one another using XML files: a back-end represented by binaries, external libraries and a couple of third-party binaries, and a GUI front-end that allows the user to correct imperfections in the results.

KEYWORDS: OCR, Digitization, Document Image Analysis, Document Export, Retroconversion

1. INTRODUCTION

A modular system such as the one proposed in this paper is easy to configure, adding to the processing flow those components that offer optimal results for the task at hand. Also, if there is a need to expand the system's functionality, new components can be developed independently of the existing ones, without requiring recompilation of the entire system, whose source code might not be available for one reason or another.
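As an illustration only, the sketch below shows one possible way such a runtime-configurable flow could be driven: a hypothetical XML file lists the standalone modules in order, and a small driver invokes each binary on the intermediate files. The element and attribute names, and the way arguments are passed, are assumptions for illustration, not the project's actual configuration format.

# Hypothetical driver for a modular pipeline: module names, order and outputs are
# read from an XML description at runtime, so new components can be added without
# recompiling anything. The XML schema shown here is an assumption.
import subprocess
import xml.etree.ElementTree as ET

def run_pipeline(config_path, input_file):
    current = input_file
    for step in ET.parse(config_path).getroot().findall("module"):
        binary = step.get("binary")    # e.g. "otsu", "deskew", "layout"
        output = step.get("output")    # intermediate image or XML result file
        # Each standalone module is a separate process; results are exchanged as files.
        subprocess.run([binary, current, output], check=True)
        current = output
    return current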
This paper is the result of the two years' work done for the author's master thesis [30] and the continuation of the work presented in [32]. Other papers published during this period that are connected to the presented subject and can add relevant information, omitted here for the sake of consistency, are [26] and [27]; they go into the details of preprocessing and contain detailed related work.

2. THE SYSTEM’S MODULES

Grayscale conversion modules

"iterative_recoloring" binary
Conversion from color to grayscale is a lossy process, as it reduces a three-dimensional
domain to a single dimension. The loss is an acceptable compromise that ensures
compatibility with software that cannot handle the extra complexity associated with a
three-channel input. The main challenge is therefore creating an image that retains the
core features of the original one, without introducing visual abnormalities.
The iterative recoloring begins by computing the luminance of the image using the classic
formula for transforming RGB images. However, as a result of this transformation, the
difference between neighboring pixels drops significantly compared to what it
was when the pixels were colored. Therefore, the algorithm attempts to recover as much
of that initial difference as possible from the original image [3][28][29][31].
At each iteration, pixels influence the associated values of their neighbors, in order to
achieve a difference that is as close as possible to the one in the original color image. When all pixels
have been evaluated, their influences on same-value pixels are added up and averaged,
and the pixels have their values changed accordingly. After a preset number of iterations,
or when the image no longer changes significantly, the processing stops, producing a
grayscale image.
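To make the procedure above concrete, the following Python/NumPy sketch shows one possible form of such an iterative difference-recovery scheme; the neighborhood, step size and stopping rule are our own simplifying assumptions, not the actual code of the "iterative_recoloring" binary.

```python
# Simplified reconstruction of the iterative recoloring idea (not the author's code).
import numpy as np

def iterative_recoloring(rgb, iterations=10, step=0.5):
    """rgb: H x W x 3 array with values in [0, 255]."""
    # Classic luminance formula as the starting point.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Neighbor differences we would like the grayscale image to preserve,
    # measured here as Euclidean distances in RGB space.
    dx_target = np.linalg.norm(np.diff(rgb, axis=1), axis=-1)   # H x (W-1)
    dy_target = np.linalg.norm(np.diff(rgb, axis=0), axis=-1)   # (H-1) x W
    for _ in range(iterations):
        dx = np.diff(gray, axis=1)
        dy = np.diff(gray, axis=0)
        # Push each horizontal/vertical difference toward the target magnitude,
        # keeping its current sign.
        ex = np.sign(dx) * dx_target - dx
        ey = np.sign(dy) * dy_target - dy
        correction = np.zeros_like(gray)
        counts = np.zeros_like(gray)
        # Each pixel accumulates the corrections suggested by its neighbors...
        correction[:, 1:] += step * ex;  counts[:, 1:] += 1
        correction[:, :-1] -= step * ex; counts[:, :-1] += 1
        correction[1:, :] += step * ey;  counts[1:, :] += 1
        correction[:-1, :] -= step * ey; counts[:-1, :] += 1
        # ...and the averaged influence is applied to its value.
        gray += correction / np.maximum(counts, 1)
        gray = np.clip(gray, 0, 255)
    return gray.astype(np.uint8)
```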

Binarization modules

"otsu" binary
The application achieves image binarization using a global threshold value computed
through Otsu's method [20]. Binarization refers to transforming a grayscale image into a
black and white image.
For the grayscale image (usually represented at 8 bpp), a histogram of pixel values is created.
Next, for each value in the histogram, the histogram is split into two sets of values and the
inter-class variance is computed. The value in the histogram that generated the highest
variance is chosen as the threshold.
The final step consists simply of cataloguing the pixels as either foreground or
background, based on how they relate to the threshold.
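A minimal Python/NumPy sketch of this global Otsu thresholding step is given below; it assumes an 8-bit grayscale image and is meant only to illustrate the computation, not to reproduce the "otsu" binary.

```python
# Illustrative global Otsu thresholding on an 8-bit grayscale NumPy array.
import numpy as np

def otsu_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / w0
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        # Inter-class variance for this split of the histogram.
        var_between = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray):
    t = otsu_threshold(gray)
    return (gray >= t).astype(np.uint8) * 255   # background vs. foreground
```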

Other binary pixel modules

"deskew" binary
This component detects and corrects the skew of a binary image. The algorithm rotates
the image by a given set of angles and chooses the value for which the
distribution of the projection of the pixels onto the vertical axis is most like that of correctly
scanned text. This method is called "Projection Profiling".
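The following sketch illustrates projection profiling under the assumption that SciPy is available for the rotation; the angle range, step and scoring function (the variance of the horizontal projection) are illustrative choices, not the values used by the "deskew" binary.

```python
# Illustrative deskew by projection profiling: the variance of the horizontal
# projection peaks when the text lines are horizontal.
import numpy as np
from scipy.ndimage import rotate

def estimate_skew(binary, angles=np.arange(-5, 5.1, 0.25)):
    """binary: 2-D array with foreground pixels as 1 and background as 0."""
    best_angle, best_score = 0.0, -1.0
    for angle in angles:
        rotated = rotate(binary, angle, reshape=False, order=0)
        profile = rotated.sum(axis=1)     # projection onto the vertical axis
        score = profile.var()             # sharp peaks -> well aligned lines
        if score > best_score:
            best_score, best_angle = score, angle
    return best_angle

def deskew(binary):
    return rotate(binary, estimate_skew(binary), reshape=False, order=0)
```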

Layout modules

"layout" binary
This component is the most complex one of the entire project. Given that the relations
between its components are fairly complex, a UML diagram is presented below,
summarizing the interactions between the most important classes of the binary and their
operations.


The functioning of this component will be exemplified on the image presented in Figure
2, with the following figures outlining the structures that are constructed as a result of the
processing. The image contains a paper fragment, structured into two columns and using
two distinct fonts, one for the author's own words, the other for citations.

Figure 1. UML class diagram

Figure 2. Fragment of a binarized image

Entity Detection
First of all, the entities of the image are detected by grouping together foreground pixels
(in our case, black) that are adjacent to other similar pixels. Such a grouping will
eventually be named an "entity".


Initially, maximum-length horizontal rows of black pixels are identified in the image.
Such a maximum-length row is called a segment. As the number of segments is quite
large, they have to be stored efficiently, using a structure that employs three parameters:
the row index of the segment, the index of the starting column and that of the last column.
Previously identified segments are then grouped based on their adjacency. To limit the
copy operations as much as possible, each segment is encapsulated in a node structure,
which is the base component of a linked list. Apart from storing a reference to the actual
segment, this node also stores a pointer to the head of the list (the head references itself), a
pointer to the next node in the list and the number of elements in the list.
Determining segment adjacency is done by checking all nodes in the list in the order of
the rows. This does not require additional reordering of the segments, as they are already
ordered due to the way they were formed, and the node construction can closely follow
segment creation.
For each node encapsulating a segment on row k, one has to determine those nodes
encapsulating segments on row k-1 that overlap with the current segment on the horizontal axis. If
the two overlapping segments do not share the same head node (i.e. they belong to different lists), then the two entities
have to be merged to form a bigger entity. Merging the two entities is a relatively simple
operation, with the smaller list being concatenated to the larger one; this involves
changing the last element of the large list to point to the first element of the smaller one,
modifying the head reference for all elements in the smaller list and updating the
length of the current list. An example of such an entity is displayed in Figure 3.

Figure 3. Letter "r" and its maximal horizontal pixel rows
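As an illustration of the segment-based entity detection described above, the sketch below groups overlapping runs on consecutive rows; it replaces the paper's linked-list bookkeeping with an equivalent union-find structure for brevity, so it should be read as a functional approximation rather than the actual implementation.

```python
# Run-based entity detection: extract maximal horizontal runs, then merge runs
# that overlap on consecutive rows (union-find instead of linked lists).
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def detect_entities(binary):
    """binary: 2-D 0/1 array. Returns entity bounding boxes (x0, y0, x1, y1)."""
    segments = []                                  # (row, col_start, col_end)
    for r, row in enumerate(binary):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                segments.append((r, start, c - 1))
            else:
                c += 1
    parent = list(range(len(segments)))
    prev_row, curr_row, last_r = [], [], None
    for idx, (r, s, e) in enumerate(segments):
        if r != last_r:
            prev_row = curr_row if last_r == r - 1 else []
            curr_row, last_r = [], r
        curr_row.append(idx)
        for j in prev_row:                         # overlap test with row above
            _, ps, pe = segments[j]
            if ps <= e and s <= pe:
                union(parent, j, idx)
    boxes = {}                                     # accumulate per-entity boxes
    for idx, (r, s, e) in enumerate(segments):
        root = find(parent, idx)
        x0, y0, x1, y1 = boxes.get(root, (s, r, e, r))
        boxes[root] = (min(x0, s), min(y0, r), max(x1, e), max(y1, r))
    return list(boxes.values())
```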

By the time the last element has been analyzed, all segments will have been grouped into
entities, as shown in Figure 4. Each identified entity has been enclosed by a red axis-aligned
rectangle. The process appears to have been successful, yet, on closer inspection,
it becomes clear that this is not the case.

Figure 4. Letter entities before filtering

As seen in Figure 4, there are several problems. Most characters have indeed been
correctly identified, but character fragmentation resulting from the binarization led to the
identification of some incorrect entities. In the areas where the character stroke is thinnest,
page deterioration will eventually cause it to fade altogether. When binarization is
applied, these areas look more like background than foreground, therefore they end up
being wrongly classified, which in turn causes character fragmentation.
As can be observed, the top part of the letter "n" is very "thin" and in one instance it is
even fragmented, resulting in the creation of two entities. The letters "h" and "u" face a
similar problem, due to their similar shape. The "y" character is also fragmented, with the
first entity looking like a "v" and the other like a small dash slightly below it.
Another fragmentation type occurs for the symbols "a" and "8"; for these two, the bounding
boxes of the fragments are partially or completely overlapping.
A more inconvenient situation is represented by the merging of the "fi" symbols into a
single entity on the first line. This is caused by the fact that too many pixels, more precisely
the pixels between the two symbols, were classified as foreground. Besides this
"main" merging issue, there is also the problem that the dot of the "i", which is an
entity in itself, overlaps the larger entity formed out of "f" and "i".
A last observation is related to the dots of "i" characters and to punctuation marks. Although
they are correctly identified, they may cause problems later on. They are a lot smaller
than any other neighboring symbol and are atypically placed, either completely above the
characters or mostly beneath their level.
Given all the problems detailed above, the necessity of applying some correction filters
becomes obvious [5].
First, an inclusion filter is applied, which checks if an entity is found completely within
the bounding rectangle of another entity. If this is the case, the two entities are merged.
After applying this filter, the fragment looks as in Figure 5. The problem with "a" and "8"
has been fixed, and the "fi" entity and the dot of "i" have also been merged into a single
entity.

Figure 5. Detailed view of the entities after inclusion filter

Next, a concatenation filter checks if an entity can be connected with another one that is
closely placed in the vertical direction. If one of the entities has its horizontal extent
bound by the margins of the other and they are close enough, with the distance being
beneath a specified threshold, then the two entities become one. The results are presented
in Figure 6. This filter solves the problem of the "y" character, uniting the two fragments into a
single entity. The distance between the two was significantly greater than the one between
the entities that form "n", "h" or "u", but concatenating in the horizontal direction is a
much more sensitive problem, because at this point it cannot be said with any degree of
certainty that the two entities are indeed part of the same symbol. Concatenating
horizontal entities can be done, but this might require knowledge of character features,
which is more appropriate for the OCR phase.
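A compact sketch of the two correction filters is given below; it works on axis-aligned bounding boxes, and the distance threshold is an illustrative value, not the one used by the "layout" binary.

```python
# Illustrative inclusion and concatenation filters over (x0, y0, x1, y1) boxes.
def contains(a, b):
    """True if box b lies completely inside box a."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

def merge(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def inclusion_filter(boxes):
    boxes = list(boxes)
    merged = True
    while merged:                        # repeat until no box is nested in another
        merged = False
        for i, a in enumerate(boxes):
            for j, b in enumerate(boxes):
                if i != j and contains(a, b):
                    boxes[i] = merge(a, b)
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes

def concatenation_filter(boxes, max_gap=4):
    """Merge vertically stacked fragments whose horizontal extents are nested."""
    result = []
    for b in sorted(boxes, key=lambda box: (box[0], box[1])):
        for i, a in enumerate(result):
            nested_x = (a[0] <= b[0] and b[2] <= a[2]) or (b[0] <= a[0] and a[2] <= b[2])
            gap = max(b[1] - a[3], a[1] - b[3])      # vertical distance between boxes
            if nested_x and gap <= max_gap:
                result[i] = merge(a, b)
                break
        else:
            result.append(b)
    return result
```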
With the entities identified and the corrections applied, they have to be grouped into lines.
Grouping entities into lines of text requires iterating through all entities and assigning
them to one of the lines. Lines are created dynamically, when an entity cannot be assigned
to any previously created line. Two entities belong to the same line if the y-axis
projections of their bounding boxes intersect and the horizontal distance between the two
is beneath a selected threshold. The threshold is chosen in such a way that lines belonging
to adjacent text columns will not be considered a single line.
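The line-grouping rule just described can be sketched as follows; the horizontal-gap threshold is again an illustrative assumption.

```python
# Group entity boxes into text lines: an entity joins a line when its vertical
# projection overlaps the line's and the horizontal gap is small enough.
def group_into_lines(boxes, max_h_gap=40):
    """boxes: (x0, y0, x1, y1) entity boxes; returns lists of member boxes per line."""
    lines = []                                        # each item: [line_bbox, members]
    for b in sorted(boxes, key=lambda box: box[0]):   # sweep left to right
        for line in lines:
            lb = line[0]
            y_overlap = min(lb[3], b[3]) - max(lb[1], b[1]) > 0
            h_gap = b[0] - lb[2]
            if y_overlap and h_gap < max_h_gap:
                line[0] = (min(lb[0], b[0]), min(lb[1], b[1]),
                           max(lb[2], b[2]), max(lb[3], b[3]))
                line[1].append(b)
                break
        else:
            lines.append([b, [b]])
    return [members for _, members in lines]
```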

Figure 6. Detailed view of the entities after concatenation filter

Line Detection

Figure 7. Detected text lines in paper fragment

The previously analyzed sample text has been organized in lines, as seen in Figure 7.
Most of the lines are correctly identified, but there are also some exceptions. The first
characters of the text are grouped into their own line, most likely because the algorithm
failed to eliminate the comma, which in turn changed the limits of the bounding box, so
the line could not be evaluated as one. The second problem is related to lines nested
within lines, as is the case of the line of the second column that is composed of the
characters "768" or the lines made of only the dots of "i".


Paragraph Detection
For the last layout analysis step, the lines are grouped into paragraphs. Determining which
lines belong to which paragraph is more than a question of proximity; it is also a matter of
the features of the characters that compose the lines.
The features that are tracked are:
• character size
• character boldness / character italics
• line spacing
For a well-defined set of entities, character size detection consists of determining the
points of minimum and maximum in the entity height histogram. Maxima correspond to
lower case letters, upper case letters and potential noise and punctuation marks. If two or
more maxima exist, the largest one corresponds to lower case letters, and the others are
considered to correspond either to uppercase letters or to punctuation marks, if they fit within a
predefined interval. If a single maximum exists, the letters are considered to be upper
case, and lower case letters are assigned the value 0. For better results, the histogram itself is
preprocessed by applying a triangle filter whose width is 10% of the histogram width. By
doing so, the maxima within a group are much more clearly outlined.
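A possible reading of this character-size estimation step is sketched below: the entity-height histogram is smoothed with a triangle filter of roughly 10% of its width and the main maxima are read off; the peak-selection details are our own simplification.

```python
# Estimate lower-case and upper-case character heights from entity heights.
import numpy as np

def size_classes(entity_heights):
    heights = np.asarray(entity_heights)
    hist = np.bincount(heights)
    width = max(3, int(0.10 * len(hist)) | 1)       # ~10% of histogram width, odd
    tri = np.bartlett(width)                        # triangle filter
    smooth = np.convolve(hist, tri / tri.sum(), mode="same")
    # Local maxima of the smoothed histogram.
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]]
    if not peaks:
        return 0, 0
    if len(peaks) == 1:                             # single maximum: upper case only
        return 0, peaks[0]
    peaks.sort(key=lambda i: smooth[i], reverse=True)
    lower = peaks[0]                                # strongest peak: lower-case height
    taller = [p for p in peaks[1:] if p > lower]
    upper = max(taller) if taller else 0            # taller maxima: upper-case height
    return lower, upper
```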
Character boldness is determined on a per-character basis. The algorithm
determines, for each pixel that composes the entity, the shorter of the vertical and
horizontal segments to which the current pixel belongs. Once all pixels have been
analyzed, a histogram of the widths of these segments is created. The most frequent value
represents the stroke width. If, however, there are two lengths of close values (which
usually happens for differences of a few pixels), an average of these values is computed
and used as the character boldness.
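The stroke-width estimation can be sketched as follows; the run-length computation and the handling of the histogram are a simplified reconstruction of the step described above.

```python
# Stroke-width (boldness) estimation: for every foreground pixel take the
# shorter of the horizontal and vertical runs through it, then pick the most
# frequent run length as the stroke width.
import numpy as np

def run_lengths_1d(line):
    """For each position in a 0/1 line, the length of the run it belongs to."""
    out = np.zeros(len(line), dtype=int)
    i = 0
    while i < len(line):
        if line[i]:
            j = i
            while j < len(line) and line[j]:
                j += 1
            out[i:j] = j - i
            i = j
        else:
            i += 1
    return out

def stroke_width(entity):
    """entity: 2-D 0/1 array containing a single character."""
    horiz = np.apply_along_axis(run_lengths_1d, 1, entity)
    vert = np.apply_along_axis(run_lengths_1d, 0, entity)
    widths = np.minimum(horiz, vert)[entity.astype(bool)]
    hist = np.bincount(widths)
    return int(np.argmax(hist[1:]) + 1) if len(hist) > 1 else 0
```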
To determine whether a character is italic or not, the algorithm evaluates the width of the
bounding box of the entity both in its initial form and after a rotation of the entity.
The pixels are rotated by 16 degrees counter-clockwise, as italic characters are usually
slanted about 16 degrees clockwise, and the widths for both cases are calculated. The reasoning
behind this is that the rotation will produce, for an italic symbol, a width smaller than that of the initial
letter, whereas rotating a regular character will increase the width.
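A tiny sketch of this italics test is shown below, assuming SciPy for the rotation; it simply compares the bounding-box widths before and after a 16-degree counter-clockwise rotation.

```python
# Italics test: an italic glyph becomes narrower after a counter-clockwise rotation.
import numpy as np
from scipy.ndimage import rotate

def is_italic(entity, angle=16):
    def bbox_width(img):
        cols = np.where(img.any(axis=0))[0]
        return cols[-1] - cols[0] + 1 if len(cols) else 0
    rotated = rotate(entity, angle, reshape=True, order=0)
    return bbox_width(rotated) < bbox_width(entity)
```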
To determine the distance between row entities, only vertically adjacent entities are
evaluated. The algorithm determines if characters are on consecutive lines by comparing
the distance between them with an average of the character heights and checking if their
horizontal coordinates overlap. When the processing is done, a numeric value is
generated, representing the average distance in pixels between rows for the analyzed text
area.
Having all these features determined, the lines are analyzed pairwise. First, the heights of
the lower case letters of the two lines are compared; if at least one of the lines has 0-sized lower
case letters, then one of the lines contains only uppercase letters, so the algorithm jumps
directly to uppercase analysis; otherwise, a ratio of the two values is computed. A similar
ratio is computed for the uppercase letter sizes and then the two values are averaged.
If there is a significant difference between the two ratios, there is no need for extra
computation, as the two rows are considered too dissimilar to belong to the same
paragraph. Otherwise, the algorithm proceeds with comparing character boldness. The
larger value is kept as reference for both rows and if the difference between the two is
below a specified threshold (1 pixel, to be more precise), the processing stops. Otherwise,
character italics are analyzed and if the difference is not too great the lines are classified
as belonging to the same paragraph.
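The pairwise compatibility test can be summarized by the following sketch; the field names and thresholds are illustrative assumptions, not the exact values used by the "layout" binary.

```python
# Illustrative pairwise test deciding whether two lines belong to one paragraph.
def _ratio(a, b):
    hi, lo = max(a, b), max(min(a, b), 1)
    return hi / lo

def same_paragraph(line_a, line_b,
                   max_size_ratio=1.2, max_bold_diff=1, max_italic_diff=0.2):
    """Each line is a dict with keys: lower_h, upper_h, stroke_w, italic_score."""
    if line_a["lower_h"] == 0 or line_b["lower_h"] == 0:
        ratio = _ratio(line_a["upper_h"], line_b["upper_h"])    # uppercase-only case
    else:
        r_lower = _ratio(line_a["lower_h"], line_b["lower_h"])
        r_upper = _ratio(line_a["upper_h"], line_b["upper_h"])
        ratio = (r_lower + r_upper) / 2
    if ratio > max_size_ratio:
        return False                          # character sizes too dissimilar
    if abs(line_a["stroke_w"] - line_b["stroke_w"]) > max_bold_diff:
        return False                          # different boldness
    return abs(line_a["italic_score"] - line_b["italic_score"]) <= max_italic_diff
```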
The algorithm successfully separated paragraphs containing italics from those without
italics, also in part due to the significant distance between the paragraphs (see Figure 8).
Also, the paragraphs containing citations were flawlessly identified. For the other type of
paragraphs, there is an error in the first column, splitting the paragraph into two at line 8.
The cause of the problem is that the algorithm is blind to line alignment; instead it only
relies on line distance and character similarity.

Figure 8. Detected paragraphs in paper fragment

Character Recognition modules

"tesseract_wrapper" binary
This binary handles the character recognition for the system. The binary processes the
output of the layout analysis component, extracts the lines of text from the image as
indicated by the XML file, and creates image files which it sends as input to the actual
Tesseract binary. The output is a text file, containing the OCR matching, which is
extracted and added to the output XML file as the contents for the corresponding text line.
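A hedged sketch of such a wrapper is shown below: it crops each detected line (assuming Pillow for image handling) and calls the Tesseract command-line binary on the resulting image; the box format and file handling are assumptions, and the XML bookkeeping is omitted.

```python
# Crop each detected text line and feed it to the Tesseract command-line binary.
import os
import subprocess
import tempfile
from PIL import Image

def ocr_lines(image_path, line_boxes, lang="eng"):
    """line_boxes: list of (x0, y0, x1, y1) produced by the layout analysis step."""
    page = Image.open(image_path)
    texts = []
    for i, (x0, y0, x1, y1) in enumerate(line_boxes):
        crop = page.crop((x0, y0, x1 + 1, y1 + 1))
        with tempfile.TemporaryDirectory() as tmp:
            img_file = os.path.join(tmp, f"line_{i}.png")
            out_base = os.path.join(tmp, f"line_{i}")
            crop.save(img_file)
            # "tesseract <image> <output_base>" writes <output_base>.txt
            subprocess.run(["tesseract", img_file, out_base, "-l", lang],
                           check=True, capture_output=True)
            with open(out_base + ".txt", encoding="utf-8") as f:
                texts.append(f.read().strip())
    return texts
```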

Tesseract OCR Overview


Tesseract is an open source OCR engine released under the Apache 2.0 license. The
application was initially developed as Hewlett-Packard proprietary software, being
recognized in 1995 as one of the top three applications of this type. After 1995, though,
development on the application stopped, until it was released as open source a decade later.
Starting from 2006, its development has been supported by Google [23].


Up until version 2.0, Tesseract only accepted single text column TIFF images. Because
these initial versions did not offer the possibility of taking into account text placement, the
results were usually poor. Starting from version 3.0, Tesseract offers support for
formatting the resulting text, as well as other information related to localization and
layout conformant to the hOCR standard.
Another improvement to the software was the addition of language support; the language
data are organized as individual files that can be added to the application whenever they are
needed.

Tesseract Architecture
Image processing is done the standard way, sequentially. The first step consists of
detecting the connected components and retaining the bounding elements of the
components. Based on these, an inclusion hierarchy is created which, by analyzing the
number of levels, permits a uniform handling of the cases of black text on white
background and white text on black background.
Following this step, the bounding elements are grouped through inclusion into blobs.
These are organized into text lines, which are then split into words, all based on character
spacing. If the spacing is uniform, the splitting is done right away. For more complex
fonts with complex spacing, word splitting relies on using fuzzy spaces [23].
Character recognition comes next and is done in two steps. First, an attempt is made to
identify each individual word. A correctly identified word is used for training an adaptive
classifier, thus permitting the identification of subsequent words. Because, following the initial
traversal, useful information may have been discovered that was not available when
the process began, a new identification pass is run for those words whose confidence
level is not high enough [23].

Symbol Matching
Tesseract employs a best-first strategy over the segmentation graph for identifying
characters and words, a search that grows exponentially with the size of the blob.
Although this approach works reasonably well for words written with the Latin script
(whose character set is relatively small) and for which the search ends as soon as a valid
entry is found in the dictionary, for texts using Chinese ideograms the computational
resources are exhausted relatively fast.
To process such a script, it is necessary to limit the segmentation points and establish a
halt condition that is easier to reach. Reducing the segmentation level is done by
considering the symbols as having a fixed width, with at most some of them being half the width
of a standard symbol.
Another strong constraint is character consistency within a phrase. Introducing more
recognizable symbols extends the capacity of correctly identifying and cataloguing, but
introduces a certain degree of ambiguity. Although one may say which character set is
dominant for a particular text, one cannot eliminate the possibility of having symbols
belonging to a different alphabet appear in the text (e.g. Greek letters in formulas). As a
result, when analyzing the current word, Tesseract will assume it is using a certain
character set only if more than half of the previous symbols belong to that set. Even in
these conditions, other character sets will not be ignored, as the shape recognition score is
taken into consideration [23].
The features that are tracked during classification are components of the approximation of
the polygonal contour of the character. During the learning phase, a four-dimensional
feature vector (position, direction, length) is created out of each element of the polygonal
approximation; these are then clustered to create prototype vectors. In the recognition
phase, the elements of the polygon are split into equally sized fragments, which removes
the length component from the vectors. These smaller fragments are compared to the
elements that compose the prototype; using small components makes the identification
process more robust to discontinuities related to character representation.
Classifying the symbols is a two-step process; in the first step, the set of possible
characters is reduced to a list of about 1 to 10 characters, using a method similar to
Locality Sensitive Hashing. The second step computes the distance of the inspected
symbol to each of the characters in the list, so that the one with the best score may be
chosen.
The good results of the classification process are due to its structure of selection by
voting. Instead of using a single bulky classifier, multiple small classifiers are used.
To avoid feature explosion in prototypes, Tesseract limits their number to 256,
which is more than enough, including for representing Chinese symbols or various
syllabic systems.

Document generating modules

"pdf_converter" binary
The binary consists of two parts: one that converts the hierarchy analysis XML file (with
possible text contents) into METS/ALTO files, and one that converts these files into the
actual PDF using the third party binary mets2pdf.
First, the METS file is created; as the aim is to obtain a PDF file, some of the
information contained in the XML is either not filled in or contains default values. These
data would usually be used for managing a collection of documents and are not relevant to
the PDF file. Moreover, in the context of the current document analysis system, the
METS file is no more than an intermediary product, which loses its relevance once the
final PDF file is generated.
Generating the metsHdr element and its associated subtree is independent of the file (or
files, if several are being processed) that are being analyzed. This section concerns
the METS file itself and will not be part of the final PDF. For example, the end file will
make no use of the creation time of the intermediary file, but this is still filled in
nonetheless.
Usually, the elements of the dmdSec section are more relevant to the final PDF, as they
are used to create navigation labels within the file. However, as the hierarchy analysis tool
is fairly primitive and has difficulties correctly identifying titles, subtitles, chapters
and so on, only two such elements are generated, with the ids "MODSMD_PRINT"
and "MODSMD_ELEC" respectively; they allow the user to introduce a title by hand,
otherwise the default value is Unknown.
The amdSec elements describe the images that compose the document, so the information
associated with these is more sensitive and should not be left at default values. Apart from the
actual features of the images, this section also contains information regarding their position
within the document.
The next section is called fileSec, and it is composed of two fileGrp sections, under which
files of a particular type are grouped. The first one contains the elements describing
images, and the second one contains ALTO type files.
The last elements necessary to the METS file are two structMap elements. The first of the
two, identified by the "LABEL" attribute "Physical Structure", contains, for this particular
application, the elements that associate image elements with the ALTO files holding the
corresponding text. The next structMap element, whose "LABEL" attribute is this time
"Logical Structure", describes the hierarchical structure of the document; more exactly,
the identified text is logically organized into articles, chapters, subchapters, titles etc.
However, given that the current hierarchy analysis is quite lacking, this hierarchy is missing
altogether. Instead, all paragraphs are catalogued as regular text and are associated with the
identifier corresponding to the appropriate ALTO file.
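The skeleton of the METS generation step might look as follows; the element names follow the METS schema, but the structure is heavily simplified and the ids, labels and attributes are illustrative, not the exact output of the "pdf_converter" binary.

```python
# Heavily simplified generation of a skeletal METS file with ElementTree.
import xml.etree.ElementTree as ET

def build_mets(image_files, alto_files, title="Unknown"):
    mets = ET.Element("mets", {"xmlns": "http://www.loc.gov/METS/"})
    ET.SubElement(mets, "metsHdr", {"CREATEDATE": "2017-01-01T00:00:00"})
    dmd = ET.SubElement(mets, "dmdSec", {"ID": "MODSMD_PRINT"})
    ET.SubElement(dmd, "mdWrap").text = title          # user-supplied or "Unknown"
    file_sec = ET.SubElement(mets, "fileSec")
    for group, files in (("IMGGRP", image_files), ("ALTOGRP", alto_files)):
        grp = ET.SubElement(file_sec, "fileGrp", {"ID": group})
        for i, path in enumerate(files):
            f = ET.SubElement(grp, "file", {"ID": f"{group}_{i}"})
            ET.SubElement(f, "FLocat", {"href": path})  # simplified; METS uses xlink:href
    phys = ET.SubElement(mets, "structMap", {"LABEL": "Physical Structure"})
    for i in range(len(image_files)):
        page = ET.SubElement(phys, "div", {"TYPE": "page", "ORDER": str(i + 1)})
        ET.SubElement(page, "fptr", {"FILEID": f"IMGGRP_{i}"})
        ET.SubElement(page, "fptr", {"FILEID": f"ALTOGRP_{i}"})
    ET.SubElement(mets, "structMap", {"LABEL": "Logical Structure"})
    return ET.ElementTree(mets)

# Usage: build_mets(["page1.png"], ["page1.xml"]).write("out.mets.xml")
```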

Other modules

"controller" binary
This binary, as its name suggests, controls and coordinates the system based on the XML
configuration file. The application parses the XML and extracts from the DOM tree the
tasks corresponding to each individual component. After the subtree is selected, it is
written to a distinct file and given for processing to the appropriate binary. Once the
processing is done, the resulting XML file is analyzed, in order to determine whether the processing
may carry on or not.
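The controller's dispatch loop can be sketched as follows; the element names, file-naming convention and result format are assumptions made for illustration only.

```python
# Extract each task's subtree from the configuration XML, write it to its own
# file and invoke the matching binary; stop when a step reports an error.
import subprocess
import xml.etree.ElementTree as ET

def run_pipeline(config_path):
    config = ET.parse(config_path).getroot()
    for i, task in enumerate(config.findall("task")):
        binary = task.get("binary")                   # e.g. "otsu", "layout"
        task_file = f"task_{i}_{binary}.xml"
        ET.ElementTree(task).write(task_file)
        subprocess.run([binary, task_file], check=True)
        result = ET.parse(f"{binary}_result.xml").getroot()   # assumed convention
        if result.get("status") != "ok":
            raise RuntimeError(f"step '{binary}' failed, stopping the pipeline")
```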

Graphic components

Although editing XML files "by hand" using nothing more than a text editor is not a
complicated task, problems may arise, as the user must be knowledgeable enough to create
valid files. Even for experienced users, writing such files may seem like a daunting task,
not because of the difficulty, but because it is tedious.
As a result, to speed up the interaction with the binaries, several graphic components were
created that allow the user to manipulate XML files without having any knowledge of the
structure of these files, and also to run the individual binaries.

Graphic controller
The graphic controller provides the same functionality as the regular controller, but in a
more "user-friendly" manner.
On startup, the controller parses all xsd files in the configuration folder and groups the
binaries accordingly. As mentioned earlier, each processing task may be performed by
multiple binaries, so the user is able to choose from one of many alternatives.
The user begins by choosing a start processing step and an end processing step. Then, for
all the processing steps in between, they have to choose an appropriate binary and, if necessary,
specify appropriate parameters. The parameters are presented in a user-friendly manner as
toggles, drop-down lists or spinner inputs, as indicated by the type of the components
described in the xsd file.
Once the users are satisfied with their configurations, they may launch the processing.
This notifies the controller to create the actual XML input files, which are then passed on
to the binaries. Just like the regular controller, the graphic controller will check the result
XML files and present a report to the users, indicating whether the processing was
successful or an error occurred.

Correction interface
Regardless of how good a processing component is, errors are bound to appear. And
while deciding on a correct binarization has some degree of subjectivity attached to it,
deciding whether two entities are part of the same line is a clear-cut procedure with well-
defined metrics. The correction interface allows the user to fix some of the aberrations
that appear as a result of the layout analysis or even of the OCR processing.

3. CONCLUSION

The presented modules should be regarded as a minimal set that a Document Image
Analysis System may need in order to fully complete the entire execution flow.
Converting a set of digitally-acquired images into a meaningful output format with proper
tagging and ranking is now possible with minimal user interaction and proper
configuration of the aforementioned modules.

4. REFERENCES

[1] How to extend OCRopus in C++. http://code.google.com/p/ocropus/wiki/CxxProgramming.
[2] The METAe engine. http://meta-e.aib.uni-linz.ac.at/metaeengine/engine.html.
[3] C. A. Boiangiu, A. I. Dvornic. "Methods of Bitonal Image Conversion for Modern and Classic Documents". WSEAS Transactions on Computers, Issue 7, Volume 7, pp. 1081-1090, July 2008.
[4] B. S. Almeida, R. D. Lins, and G. D. F. Pereira e Silva. Thanatos: Automatically retrieving information from death certificates in Brazil. Proceedings of the 2011 Workshop on Historical Document Imaging and Processing, pages 146-153, 2011.
[5] A. Boiangiu, A. C. Spataru, A. I. Dvornic, and C. C. Cananau. Normalized text font resemblance method aimed at document image page clustering. WSEAS Transactions on Computers, July 2008.
[6] C. A. Peanho, H. Stagni, F. S. C. da Silva. Semantic information extraction from images of complex documents. Applied Intelligence, 37(4), December 2012.
[7] C. B. Jeong, S. H. Kim. A document image preprocessing system for keyword spotting. Proceedings of the 7th International Conference on Digital Libraries: International Collaboration and Cross-fertilization, December 2004.
[8] The Association for Automatic Identification and Capture Technologies. Optical character recognition (OCR). http://www.aimglobal.org/technologies/othertechnologies/ocr.pdf.
[9] TROY Group. MICR basics handbook. http://www.troygroup.com/support/documents/50-70300-001_CMICRBasicsHandbook_000.pdf, Accessed: October 2004.
[10] P. W. Handel. Statistical machine, June 1933.
[11] R. Holley. How good can it get? Analyzing and improving OCR accuracy in large scale historic newspaper digitization programs. D-Lib Magazine, 15(3):1-13, 2009.
[12] Phoenix Software International. Optical character recognition - what you need to know. http://www.phoenixsoftware.com/pdf/ocrdataentry.pdf, Accessed: January 2009.
[13] T. Ishihara, T. Itoko, D. Sato, A. Tzadok, and H. Takagi. Transforming Japanese archives into accessible digital books. Proceedings of the 12th ACM/IEEE-CS Joint Conference on Digital Libraries, pages 91-100, 2012.
[14] J. He, C. Downton. User-assisted archive document image analysis for digital library construction. Proceedings of the Seventh International Conference on Document Analysis and Recognition, August 2003.
[15] E. Klijn. The current state-of-art in newspaper digitization. D-Lib Magazine, January-February 2008.
[16] L. Likforman-Sulem, P. Vaillant, and A. B. de la Jacopière. Automatic name extraction from degraded document images. Pattern Analysis and Applications, 9(2-3), October 2006.
[17] R. D. Lins, G. D. F. Pereira e Silva, and A. D. A. Formiga. Enhancing a platform to process historical documents. Proceedings of the 2011 Workshop on Historical Document Imaging and Processing, pages 169-176, 2011.
[18] X. Lu, S. Kataria, W. J. Brouwer, J. Z. Wang, P. Mitra, and C. Lee Giles. Automated analysis of images in documents for intelligent document search. International Journal on Document Analysis and Recognition, 12(2), June 2009.
[19] E. Matthaiou and E. Kavallieratou. An information extraction system from patient historical documents. Proceedings of the 27th Annual ACM Symposium on Applied Computing, page 787, 2012.
[20] N. Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics, 1979.
[21] R. Sanderson, B. Albritton, R. Schwemmer, and H. Van De Sompel. SharedCanvas: A collaborative model for medieval manuscript layout dissemination. Proceedings of the 11th Annual International ACM/IEEE Joint Conference on Digital Libraries, pages 175-184, 2011.
[22] United States Postal Service. How a letter travels. http://about.usps.com/publications/pub100/pub100_078.htm.
[23] R. Smith. An overview of the Tesseract OCR engine. Ninth International Conference on Document Analysis and Recognition, 2:629-633, 2007.
[24] H. Toselli, E. Vidal, and A. Juan. Interactive layout analysis and transcription systems for historic handwritten documents. Proceedings of the 10th ACM Symposium on Document Engineering, pages 219-222, 2010.
[25] P. Tranouez, S. Nicolas, V. Dovgalecs, A. Burnett, L. Heutte, Y. Liang, and R. Guest. DocExplore: Overcoming cultural and physical barriers to access ancient documents. Proceedings of the 2012 ACM Symposium on Document Engineering, pages 205-208, 2012.
[26] A. Tigora, M. Zaharescu. "A Document Image Analysis System for Educational Purposes", Journal of Information Systems & Operations Management, 05/2013, 7(1), pp. 116-175.
[27] A. Tigora. "An Overview of Document Image Analysis System", Journal of Information Systems & Operations Management, 12/2013, 7(2), pp. 378-390.
[28] A. Boiangiu, A. V. Stefanescu. "Target Validation and Image Color Calibration", International Journal of Circuits, Systems and Signal Processing, Volume 8, 2014, pp. 195-202.
[29] A. Boiangiu, A. I. Dvornic. "Bitonal Image Creation for Automatic Content Conversion". Proceedings of the 9th WSEAS International Conference on Automation and Information, WSEAS Press, pp. 454-459, Bucharest, Romania, June 24-26, 2008.
[30] A. Tigora. "Document Image Analysis System", Master Thesis, Unpublished Work, Bucharest, Romania, 2013.
[31] C. A. Boiangiu, I. Bucur, A. Tigora. "The Image Binarization Problem Revisited: Perspectives and Approaches", The Proceedings of Journal ISOM, Vol. 6 No. 2 / December 2012, pp. 419-427.
[32] A. Tigora. "Designing a Flexible Document Image Analysis System - Part 1: The Architecture", The Proceedings of Journal ISOM (Journal of Information Systems & Operations Management), Vol. 10 No. 1 / May 2016, pp. 235-245.


IMPLEMENTATION SOLUTIONS FOR DEEP LEARNING NEURAL


NETWORKS TARGETING VARIOUS APPLICATION FIELDS

Dana-Mihaela Petroşanu 1*
Alexandru Pîrjan 2

1* Corresponding author, PhD Lecturer, Department of Mathematics-Informatics, University Politehnica of Bucharest, 313 Splaiul Independentei, district 6, code 060042, Bucharest, Romania, danap@mathem.pub.ro
2 PhD Hab. Associate Professor, Faculty of Computer Science for Business Management, Romanian-American University, 1B Expozitiei Blvd., district 1, code 012101, Bucharest, Romania, alex@pirjan.com

ABSTRACT

In this paper, we tackle important topics related to deep learning neural networks and
their undisputed usefulness in solving a great variety of applications ranging from image
and voice recognition to business related fields where they have the potential to bring
significant financial benefits. The implementations and scale sizes of deep learning neural
networks are influenced by the requirements of the developed artificial intelligence (AI)
applications. We are focusing our research on certain application fields that are most
suitable to benefit from the deep learning optimized implementations. We have analyzed
and compared the most popular deep learning libraries available today. Of particular
interest was to identify and analyze the specific features that must be taken into account in
accordance with the tasks that have to be solved.

KEYWORDS: Deep Learning, Artificial Intelligence, Deep Learning Libraries.

1. INTRODUCTION

The concept of deep learning has existed for a long period of time, being called different
names according to different perspectives and moments in time. In recent years,
the amount of data has increased significantly, and more and more data can be
used in the training process of artificial neural networks (ANNs), thus increasing the
usefulness and applicability of the deep learning concept. The evolution of parallel
hardware architectures has made it possible to develop complex deep learning structures
having huge numbers of processing elements (neurons). Therefore, deep learning models
have evolved along with the computational resources, becoming able to successfully tackle
complex problems with a high level of accuracy.
According to the scientific literature [1], the concept of "deep learning" originated in the
1940s and one can identify three stages in its evolution. In the first stage, between the
1940s and the 1960s, the concept was called "cybernetics" and was marked by the development
of the perceptron, which made it possible to train a single neuron. Afterwards, in the
period 1980-1995, the concept was called "connectionism", being strongly influenced by
the development of the back-propagation technique, which made it possible to train
ANNs having at most two hidden layers. The concept of "deep learning" was coined in
2006, during the third stage [1].
According to [2], the concept of deep learning can be viewed as a family of machine
learning techniques that exhibit certain common characteristics. Thus, they consist of
several layers of neurons, interlinked in order to identify, extract and process certain
characteristics. The output produced by each intermediary layer is used as an input for the
following one. These techniques can employ either supervised or unsupervised methods.
Common features are identified and processed in order to obtain specialized
characteristics, arranged according to their rank. The techniques can build multiple levels
of representation, associated with different concepts and degrees of abstraction.
Interlinking is the main characteristic on which the deep learning concept relies. The
whole principle of deep learning lies in the fact that even if a single element (also called a
neuron) does not exhibit intelligent features, several elements interlinked together might
have the capacity to unveil such qualities. Therefore, a key factor in achieving
intelligent behavior is the number of processing elements, which should be extensive [1].
Over the years, it has been proven that a network's size significantly influences the
accuracy of the obtained results and makes it possible to approach complex tasks. If we
compare the sizes of today's neural networks with the ones from forty years ago, we
notice that the dimensions of these networks have increased dramatically. In order
to implement huge-sized networks, one must have access to significant
computational hardware and software resources.
In the following section, we analyze the most important issues regarding the
implementation of deep learning neural networks.

2. IMPLEMENTATION ASPECTS OF DEEP LEARNING NEURAL NETWORKS

When training deep learning neural networks one can identify several approaches. A
classical approach consists in training the networks, using the Central Processing Unit
(CPU) of only one computer. Nowadays, this way of dealing with the problem does not
provide sufficient computational resources. The modern approach consists in using
multiple processing nodes distributed over an entire network and modern parallel
processing architectures such as those of the graphics processing units (GPUs).
Neural networks have high computational requirements and thus one has to overcome
the serious limitations of classical CPUs. There are several
optimization techniques that one can apply in order to improve the performance of central
processing units and benefit fully from their architectural features, such as: the
multi-threading mechanism; aligning and padding the data so as to allow the central
processing unit to retrieve the data optimally; enclosing supplementary data bytes in
between the existing data structures; devising appropriate floating- or fixed-point execution
plans; minimizing memory consumption by sorting the data in descending order
of element width; and devising customized numerical computation
procedures [1].


Many people who carry out scientific activities related to machine learning tend to
overlook the above-mentioned implementation aspects, thus risking becoming limited in
terms of the maximum number of neurons and of the obtained accuracy. Nowadays, GPUs
have become the platform of choice when developing state-of-the-art implementations of
artificial neural networks. Initially, graphics processing units targeted exclusively the high
computational requirements of video gaming applications [3]. The massive processing
power of GPUs offers great advantages in optimizing applications from various fields
[4] and thus in the development of neural networks.
The type of processing that the GPU performs is relatively straightforward when
compared to the central processing unit's tasks, which necessitate frequent shifts of the
execution to different sequences of instructions. One can parallelize most of the processing
tasks without difficulty, as the vast majority of the required computations do not
depend on each other. When designing graphics processing units, the
manufacturers had to take into account the necessity of obtaining a large amount of
parallelism and a high transmission capacity for the data fetched from memory, being
compelled to reduce the GPU clock frequency and its ability to shift the
execution frequently to different sequences of instructions. In order to develop an artificial neural
network, scientists must use various buffers in order to store the network's
parameters and the threshold values at which activation occurs, taking into account the fact
that all these values must be computed at each new step of the training process [1].
Graphics processing units represent a better solution for implementing deep learning
neural networks, as they offer a higher transmission capacity for the data fetched from
memory when compared to that of central processing units. Furthermore, as the
development process of an artificial neural network (ANN) does not require frequent
shifts of the execution to different sequences of instructions, the graphics processing units
can successfully manage the whole execution process, being best suited for this kind of
task. The software operations related to ANNs have the potential to be decomposed
into multiple tasks that can be handled in parallel by neurons belonging to the same layer.
Therefore, these operations can be carried out easily when one employs the parallel
processing power of graphics processing units.
Initially, when graphics processing units were first developed, their
architecture was restricted exclusively to graphics processing purposes. As time
passed, graphics processing units evolved, offering greater flexibility through
specialized functions that could be called for different allocation and transformation
operations. What was interesting is that, for the first time, these operations no longer had to
be solely for graphics processing. Consequently, graphics processing units
became a viable tool for performing scientific computations, using resources that
were initially developed only for graphics rendering.
After Nvidia introduced the Compute Unified Device Architecture
(CUDA), graphics processing units evolved into general-purpose
graphics processing units, able to execute arbitrary source code and not only specialized
functions. In light of the new possibilities that emerged due to the huge parallel
processing power and increased memory bandwidth, the graphics processing units that
incorporated the CUDA architecture became the platform of choice for scientists
exploring the field of deep learning [1].
In order to harness the full computational potential of a graphics processing unit, developers
must apply optimization techniques other than those used for central
processing units. In contrast with central processing units, which use the cache mechanism
to improve the overall performance of the code, graphics processing units may
perform some operations several times in parallel and still obtain an improved performance
when compared to the CPU.
For a programmer to achieve peak performance on a graphics processing unit, he must
make full use of the Compute Unified Device Architecture threading mechanism and
carefully manage the threads within blocks of threads and the blocks within grids of blocks of
threads. Particular attention must be paid to managing the memory correctly and efficiently.
One must take into account techniques such as memory coalescing, but most of
all the features offered by the respective CUDA architecture, in order to improve the
software performance of a neural network. An important implementation aspect when
developing a neural network using a CUDA GPU consists in ensuring that every thread of
a group of threads processes the same task, at the same time, in parallel.
The graphics processing unit is not suited for frequent shifts of the execution to different
sequences of instructions and this is the reason why one must not completely neglect the
central processing unit, the best performance being obtained when using a hybrid
approach for developing artificial neural networks.
In CUDA, the execution threads within a block of threads are grouped into warps,
containing 32 threads. This dimension represents the smallest amount of data that is
processed by a Compute Unified Device Architecture multiprocessor, according to the
"Single Instruction, Multiple Data" category of Flynn's taxonomy [5]. Therefore,
during an execution cycle, the threads within the same warp process a single instruction.
If different instructions have to be processed by the threads within the same warp, they are
executed sequentially.
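The following Numba/CUDA sketch (the toolkit choice is our own assumption, not prescribed by the paper) illustrates the block and warp organization discussed above: the block size is a multiple of the 32-thread warp and the kernel body avoids data-dependent branching, so the threads of a warp follow a single instruction stream.

```python
# Minimal CUDA kernel launched with a block size that is a multiple of the warp.
import numpy as np
from numba import cuda

@cuda.jit
def relu_kernel(x, out):
    i = cuda.grid(1)                       # global thread index
    if i < x.size:                         # bounds guard only; no data-dependent branch
        out[i] = max(x[i], 0.0)

def relu(x, threads_per_block=128):        # 128 threads = 4 warps of 32 threads
    d_x = cuda.to_device(x)
    d_out = cuda.device_array_like(x)
    blocks = (x.size + threads_per_block - 1) // threads_per_block
    relu_kernel[blocks, threads_per_block](d_x, d_out)
    return d_out.copy_to_host()

# Example: relu(np.random.randn(1 << 20).astype(np.float32))
```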
Sometimes, in their research, developers need to check the quality, performance or
reliability of new algorithms or models and, for this purpose, they often use software
libraries developed in various programming languages, containing high-performance
software packages useful for developing their applications. For example, in the machine
learning field, libraries are developed in Python, Java, .NET, C, C++, Lua and other
programming languages [1], [6], [7]. Some of the most popular deep learning libraries are
categorized by their programming language and summarized in Table 1.


Table 1. Some of the most popular deep learning libraries

No. | Programming language           | Deep learning libraries/toolboxes developed in the respective language
1   | Python                         | Theano (and, based on it: Keras, Pylearn2, Lasagne, Blocks), Caffe, nolearn, Gensim, Chainer, deepnet, Hebel, CXXNET, DeepPy, DeepLearning, Neon
2   | Python API over a C/C++ engine | TensorFlow (by Google)
3   | Matlab                         | ConvNet, DeepLearnToolBox, cuda-convnet, MatConvNet
4   | C++                            | eblearn, SINGA, NVIDIA DIGITS, Intel® Deep Learning Framework, Microsoft Cognitive Toolkit (previously known as CNTK)
5   | Java                           | N-Dimensional Arrays for Java (ND4J), Deeplearning4j, Encog, H2O Web API
6   | JavaScript                     | Convnet.js
7   | Lua                            | Torch
8   | Julia                          | Mocha
9   | Lisp                           | Lush (Lisp Universal Shell)
10  | Haskell                        | DNNGraph
11  | .NET                           | Accord.NET
12  | R                              | darch, deepnet

The Python programming language was developed by Guido van Rossum and launched in
1991 as a general-purpose, high-level language. There are numerous
frameworks and libraries developed in Python. For example, the Theano library can be
used for processing mathematical operations and facilitates the development of
deep learning algorithms. Theano is the foundation for several other libraries that are
built on top of it: Keras, Pylearn2, Lasagne, Blocks [8].
Keras represents a deep learning neural network library, comprising several modules, that
can be successfully used for processing tensors using either the graphics processing unit
or the central processing unit. The Pylearn2 library comprises a large set of algorithms that
can be successfully used in the development of deep learning neural networks. The
Lasagne library has been developed in modules, being characterized by simplicity and
clarity, with the success of its practical applications in mind. Another development
framework useful for building deep learning libraries, based on the Theano library, is
Blocks [9].
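As a flavor of the API style exposed by these Theano-based libraries, the following minimal Keras sketch trains a small fully connected network on synthetic data; the architecture and data are made up, and a Theano or TensorFlow backend is assumed to be installed.

```python
# Minimal Keras example: a small fully connected binary classifier on toy data.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.random.rand(1000, 20)                 # toy inputs
y = (x.sum(axis=1) > 10).astype("float32")   # toy binary labels

model = Sequential([
    Dense(64, activation="relu", input_dim=20),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))       # [loss, accuracy]
```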
One of the most popular frameworks for developing deep learning neural networks is
Caffe, developed by BVLC (Berkeley Vision and Learning Center) and other
developers of the software community; it is written in the C++ programming language,
licensed under the Berkeley Software Distribution (BSD) license, and has a Python interface.
The primary characteristics based on which this framework has been designed are efficient
processing and modularization, in order to obtain a state-of-the-art framework. Using the
Caffe framework, Google has developed its DeepDream project [6].
Another deep learning library developed in Python is nolearn, which comprises a set of
machine learning tools and makes use of wrapping and abstraction over already
developed libraries [10]. Gensim is a toolkit developed in Python that exposes
several high-performance algorithms designed for developing deep learning neural
networks that have to process huge amounts of text data [8].
Chainer, programmed in Python, facilitates the implementation of deep learning neural
networks, offering support for certain algorithms. One prominent characteristic of Chainer
is its flexibility, being easy to use and understand [9]. Another deep learning
implementation achieved using the Python language is deepnet, an implementation that is
based on Graphics Processing Units. Many of the most well-known deep learning
algorithms have been implemented within deepnet on the GPUs in order to facilitate the
development of deep learning neural networks [6].
The Hebel library was developed to facilitate the development of deep learning neural
networks in the Python programming language, using PyCUDA that makes it possible to
benefit from the huge parallel computational power of Graphics Processing Units that
incorporate the Compute Unified Device Architecture (CUDA). It offers support for the
most well-known and efficient kinds of neural networks [10].
CXXNET is a reliable deep learning framework where the processing is spread among
multiple processing nodes. It offers support for the CUDA-C language and incorporates
user-friendly interfaces useful for developing the networks [8]. DeepPy is a framework
for deep learning, under the free software license of the Massachusetts Institute of
Technology (MIT), being based on the Python programming language. The existing
source code can be extended with ease and also offers support for Nvidia Graphics
Processing Units that incorporate CUDA technology [9].
DeepLearning is a library that was programmed in C++ and Python and contains high-
performance software packages useful for developing deep learning neural networks [6].
Neon, a framework for creating deep learning neural networks based on Intel Nervana's
Python stack, has as its main declared goal to attain best-in-class results [10].
The TensorFlow software library is written with a Python API over a C++ engine and was
developed by Google researchers. This library is useful in carrying out research
related to deep learning neural networks and many other scientific fields. TensorFlow is
designed to solve problems related to numerical computations through graphs. Within
such a data flow graph, the nodes are the mathematical operations and the edges are the
tensors. The library can be easily deployed on several central processing units or
graphics processing units, in different environments, through the same Application
Programming Interface (API) [10].
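A minimal sketch of this graph-based style, written against the TensorFlow 1.x API that was current at the time, is shown below: the nodes are operations, the edges are tensors, and a session executes the graph.

```python
# TensorFlow 1.x graph example: build a small linear computation, then run it.
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[None, 3], name="a")   # input tensor (edge)
w = tf.Variable(tf.random_normal([3, 1]), name="w")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(a, w) + b                                      # matmul and add nodes

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={a: [[1.0, 2.0, 3.0]]}))
```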
The popular development environment Matlab offers many useful software instruments
for developing and implementing a large variety of neural networks: ConvNet,
DeepLearnToolBox, cuda-convnet, MatConvNet. Convolutional neural networks
(ConvNet) represent powerful deep learning tools for classifying different elements, by
acquiring knowledge on their own from unprocessed data.
Another software instrument for developing deep learning networks is the
DeepLearnToolBox that is not active anymore, being considered obsolete. Still, it
contains different types of deep learning neural networks like deep belief networks and
convolutional networks along with different autoencoder types [10]. Cuda-convnet is
another useful implementation of the feed-forward artificial neural networks developed in
Matlab. It is implemented in C++/CUDA, being a high-performance toolbox. Its training
is based on the backpropagation algorithm.
Another Matlab toolbox is MatConvNet that implements Convolutional Neural Networks
(CNNs) and is useful in applications that require the automated extraction of information
from images. The main properties of this toolbox are its simplicity, efficiency and
capacity of running the most important Convolutional Neural Networks, among which it
is worth mentioning applications for image or text detection, sorting, segmenting,
recognition [6].
There are a lot of deep learning frameworks available that use the C++ programming
language as their basis, for example: eblearn, SINGA, NVIDIA DIGITS, Intel Deep
Learning Framework and Microsoft Cognitive Toolkit, previously known as CNTK.
Eblearn represents a useful C++ library, under an open-source license, developed at the
New York University. It is useful in developing different types of CNNs, providing a
friendly Graphical User Interface [10].
Another deep learning framework is SINGA, developed in 2014 at the National University
of Singapore. SINGA is supported by an American non-profit corporation, the ASF
(Apache Software Foundation). This library is based on partitioning data between the
nodes of a cluster and employs a parallel training process. It is useful in managing the
interactions between computers and human languages (natural language processing) and
supports many classes of deep learning models [6].
Developed by Nvidia, DIGITS (Deep Learning GPU Training System) represents a
powerful tool, useful for training, in a reduced amount of time, DNNs (deep neural networks) that are
capable of classifying images and detecting objects. Its main advantage is the fact that it is
interactive, and therefore researchers can focus mainly on designing the
neural networks, without having to be concerned about writing computer
programs or identifying and removing errors.
Intel Deep Learning Framework (IDLF) represents a SDK (software development kit)
library, providing a software framework that fuses together the Intel platforms and is
useful in training, executing and accelerating DNNs. The main characteristics of IDLF
are: it offers support for a rich variety of accelerators; its development is based on an
optimized code; it supports the development of a wide range of ANNs on the same
platform; it is suitable for cloud computing by devising schemes for allocating the tasks
among different processing nodes; it enables the improvement of the training process
during its execution [6].
Microsoft Cognitive Toolkit, previously known as CNTK is a deep learning library
developed in C++ that offers high accuracy and speed, being compatible with many
common programming languages or algorithms. This toolkit is characterized by a series
of capabilities and features that it offers to the users. It contains components that are able
to manage sparse or dense data from other programming languages, being suitable for
both unsupervised and supervised learning. It also contains components that are able to
handle massive datasets. Microsoft Cognitive Toolkit is characterized by an efficient use
of resources, offering a high level of parallelism on multiple processing units and an
optimized mechanism of memory sharing, useful in managing large models in the
memory of graphics processing units. The toolkit offers a full application programming
interface useful in developing neural networks, evaluating models, ensuring suppleness
and flexibility [10].
A series of deep-learning libraries are developed in the Java general-purpose
programming language, for example: N-Dimensional Arrays for Java (ND4J),
Deeplearning4j, Encog and H20 Web API. The first of these libraries, ND4J, is designed
for the Java virtual machines (JVMs), an abstract machine for automatically performing
computations, that makes it possible for a computer to run Java software instructions.
ND4J is characterized by the fact that it runs specific routines fast, while requiring a small
amount of random access memory [11].
The Deeplearning4j deep-learning library is written for the Java and Scala programming
languages, under an open-source license, and was developed mainly to be used in
managing the external and internal factors that affect the functions of companies, its
usefulness in research being of a lesser extent. Encog is another software instrument that has
been evolving since 2008, being useful in developing deep learning networks and supporting
a wide range of learning algorithms and neural networks. It is useful in many scientific
fields, especially in medicine and finance [11].
H2O is another deep-learning library developed in Java, under an open-source license. It
is able to scale to multiple processing nodes, offering a high level of performance and access
to complex algorithms that enable programmers to develop powerful applications, using
an intuitive application programming interface. A lot of companies have developed
complex expert systems that help improve their economic activity. H2O is able to store
and process billions of tuples in memory using a specialized compression algorithm. H2O
offers access to familiar application programming interfaces and also an incorporated web
interface [6].
Convnet.js represents a deep learning library developed in Javascript, having as a unique characteristic the ability to develop deep learning neural networks using solely an internet browser. Convnet.js has the same software and hardware requirements as the browser, being able to train and implement ANNs without a hassle. It can also be deployed on a server using the Node.js runtime [10].
Another programming language useful in developing deep learning neural network libraries is Lua, developed in 1993. Although used extensively in the videogames industry, Lua has also been used successfully in developing a wide range of popular commercial applications, proving to offer a high level of performance, efficiency, ease of programming and low resource consumption.
Based on Lua, Torch was released in 2002, a consistent framework that provides access to algorithms optimized for Compute Unified Device Architecture (CUDA) enabled GPUs, useful in the machine learning field. The main aim of the Torch framework is to offer the greatest extent of adaptability and a reduced time for developing specialized algorithms with minimum effort. There are a lot of popular social networks, search engines, universities and research institutes that use Torch in their everyday activities [12].
A programming language designed for numerical computations is Julia, released in 2012. Julia includes a complex compiler along with a comprehensive specialized mathematical library, offering support for executing tasks in parallel on multiple processing nodes. This programming language was used in developing Mocha, a specialized framework for deep learning neural networks, influenced as a development model by the above analyzed Caffe framework. Mocha implements specialized numerical solvers and tools useful in training CNNs. The most important characteristics of Mocha are its modularity, its complex interface and its ease of portability, being compatible with multiple JavaScript assertion libraries [13].
Lisp represents a family of programming languages that dates back to 1958. It was released one year after the Fortran programming language and it is still widely used today. Initially, Lisp was developed as a way of facilitating mathematical notation in software programs and soon afterwards it became the language of choice for scientists in the artificial intelligence field. Lisp introduced for the first time many programming paradigms. Based on Lisp, the Lush object-oriented programming language was developed under the General Public License, targeting the scientific field. It has a wide range of applications (machine learning, image and signal processing, extracting knowledge from data, etc.) and can overcome the limitations of other consecrated development environments [6].
The Haskell programming language was released in 1990, targeting diverse application domains while not allowing the state of an object to be changed or modified after it has been created. There are various implementations of Haskell released under an open source license, some of them being compliant with the Haskell 98 standard ("Glasgow Haskell Compiler", "Utrecht Haskell Compiler", "Jhc", "LHC" etc.), while other implementations are no longer actively maintained.


In Haskell, DNNGraph was developed, a DNN domain specific language for describing the network's structure. DNNGraph makes use of different libraries, like the "lens" library and the graph oriented library "fgl", for defining the structure of the network, along with a series of optimization strategies. DNNGraph is able to generate files compatible with the above analyzed Caffe and Torch frameworks [10].
In the extremely popular .NET framework, Accord.NET was developed, offering artificial intelligence capabilities along with specific software libraries in the fields of sound and graphics processing. Accord.NET offers the necessary tools to build commercial applications, providing the developer with a lot of ready-made templates and the possibility to exchange and interchange machine learning algorithms with ease [6].
A very popular programming language, released in 1993 under an open source General Public License, is R, a development environment that facilitates computation in the fields of statistics and image processing. R offers a CLI (command line interface) and a few graphical user interfaces are also available [10].
There are several frameworks and tools developed in the R programming language designed for building DNNs, for example the darch package and deepnet. The darch package offers the possibility to pre-train the neural networks using the "contrastive divergence" method and popular specialized algorithms that make detailed adjustments to the networks' parameters. Another framework, developed based on the R programming language, is deepnet, which offers several DNN architectures, specialized algorithms and encoders [6].
In the next section, we present a series of strategies, useful in improving the software
performance of the deep learning neural networks implementations.

3. STRATEGIES FOR IMPROVING THE PERFORMANCE OF DEEP LEARNING NEURAL NETWORKS IMPLEMENTATIONS

Deep learning neural networks can be successfully implemented in image, video, sound,
text recognition or processing and in obtaining accurate predictions in various scientific
fields, such as economy, mathematics, physics, neuroscience, medicine, pharmaceutical
industry. Most of the electronic means of payments and micropayments need solutions
that can identify, prevent and counteract frauds [14], [15]. Deep learning neural networks
offer new possibilities to secure the electronic means of payments, being able to identify
fraud faster and more accurately than the human factor. Used along with other algorithms,
deep learning represents a powerful tool for classifying, clustering and forecasting, based
on the input data (Figure 1).


Figure 1. A deep learning neural network that recognizes the traffic signs1

Figure 1 represents a deep learning neural network that recognizes traffic signs. The neural network is trained using a set of traffic signs as input. After the network has been trained, it is put to the test using as input a blurred traffic sign, as if it were affected by poor meteorological conditions while driving.
When developing large neural networks that have to process a lot of data in order to obtain the results, a huge computing power is required. A single workstation is not sufficient for this kind of computational volume. As a consequence, a distributed system comprising multiple processing nodes (workstations) is an appropriate solution for training and achieving the results in due time. Processing the data in parallel is easily achieved in the case of deep learning neural networks, as every input set that has to be inferenced can be executed by a different node. Another method for attaining data parallelism is to divide the data into several parts and allocate each part to a different processing node [1].
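To illustrate the data-parallel inference idea described above, the following minimal Python sketch (the names predict and input_batches are hypothetical placeholders for a trained network and its data, not part of the original text) distributes independent input batches to several worker processes, each one standing in for a processing node.

```python
from multiprocessing import Pool

import numpy as np


def predict(batch):
    # Stand-in for running an already trained network on one batch of inputs;
    # here it simply returns the index of the largest value per row.
    return np.argmax(batch, axis=1)


if __name__ == "__main__":
    # Synthetic stand-in for the input sets that have to be inferenced.
    input_batches = [np.random.rand(128, 10) for _ in range(8)]

    # Each batch is evaluated by a different worker process (processing node),
    # so the batches are processed in parallel and the results gathered back.
    with Pool(processes=4) as pool:
        predictions = pool.map(predict, input_batches)

    print("batches processed:", len(predictions))
```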

1 The figure has been created using the software tool Visio 2016, by inserting "Online Pictures" type elements, tagged with reusable "Creative Commons Licenses".

One must note that it is much more difficult to process the data in parallel throughout the training phase. For example, in the case of stochastic gradient descent, one would obtain better results by allocating the steps that have to be computed to several processing nodes of the distributed system. However, this approach cannot be applied directly, as the values obtained at a certain step depend on the values obtained at the previous one.
In the scientific literature [1], [16], [17], an asynchronous approach is proposed that consists in sharing the portions of memory where the values reside among multiple processing cores. Every variable is left unlocked during the processing in order to assure concurrent access of the processing cores to the values. Nevertheless, this solution has the drawback of diminishing the enhancement that should be obtained when progressing to a new step of the algorithm, as several processing cores can sometimes store new data over the existing contents of a variable and thus cancel the progress up to that moment. Still, an undisputed advantage of the asynchronous approach consists in the speedup of the whole learning phase.
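The sketch below is a minimal, purely illustrative Python rendering of this lock-free idea (in the spirit of the approach from [17]); the quadratic loss, learning rate, thread count and synthetic data are all assumptions. No lock protects the shared parameter vector, so workers may occasionally overwrite each other's progress, which is exactly the trade-off described above.

```python
import threading

import numpy as np

# Shared parameter vector, updated concurrently by all workers without locks.
params = np.zeros(10)
data = np.random.rand(1000, 10)
targets = data @ np.arange(10, dtype=float)  # synthetic linear targets
lr = 0.01


def worker(samples, labels):
    """Compute per-sample gradients of a quadratic loss and write them straight
    into the shared parameters, without any synchronization."""
    global params
    for x, y in zip(samples, labels):
        grad = 2.0 * (x @ params - y) * x
        params -= lr * grad  # unlocked update; workers may overwrite each other


threads = [threading.Thread(target=worker, args=(data[i::4], targets[i::4]))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("approximate recovered weights:", np.round(params, 2))
```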
Other variations of this method proposed in the literature consist in handling the values using a dedicated parameter server [18]. The asynchronous approach implemented on a distributed system yields notable results, being at the forefront of the training process for deep learning neural networks.
The most important aspect in off-the-shelf software is to reduce as much as possible the
execution time and the memory load when computing the results rather than when
training the neural network. It is not uncommon for a certain neural network to be trained
using high computational resources and afterwards to be implemented and put into use in
an environment where the hardware resources are more consumer-oriented.
In order to deploy a developed model efficiently, a model compression strategy is often used, within which an initial model is replaced with another one of smaller size, which requires a reduced amount of memory and offers the benefit of a reduced execution time. This technique is suitable for the cases when the initial model has a big size. In this case, several smaller models are designed and tested, the initial one finally being replaced with the model from this set that has the smallest generalization error. Analyzing and assessing all the developed models can become an intensely resource-consuming task [19].
In some situations, using only one model can yield better results if its size is large enough. In the cases when the number of existing training elements is reduced, one must use a larger number of parameters than the specific problem requires. After training this neural network, one can simply obtain a new, larger training set of elements by using it to label additional inputs. Afterwards, using these elements, one can train other models of reduced sizes that offer very good results, using different subsets of the new, larger training set as training sets. The training data must be representatively sampled, so that the neural network can provide correct results when it is applied in a real-world scenario [1].
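A minimal sketch of this idea, assuming scikit-learn is available and using small, hypothetical layer sizes and synthetic data, is given below: a deliberately large network is trained on the scarce labelled data, it is then used to label a much bigger set of unlabeled inputs, and a smaller network is trained on a subset of this new, larger training set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Scarce original training data (synthetic stand-in).
X_small = rng.normal(size=(200, 20))
y_small = (X_small[:, 0] + X_small[:, 1] > 0).astype(int)

# 1. Train a deliberately over-parameterized "large" model on the small set.
large_model = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=500)
large_model.fit(X_small, y_small)

# 2. Use the large model to label a much bigger set of unlabeled inputs,
#    obtaining a new, larger training set.
X_unlabeled = rng.normal(size=(5000, 20))
y_pseudo = large_model.predict(X_unlabeled)

# 3. Train a smaller, cheaper model on a subset of the enlarged training set.
subset = rng.choice(len(X_unlabeled), size=2000, replace=False)
small_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
small_model.fit(X_unlabeled[subset], y_pseudo[subset])

print("small model agreement with large model:",
      (small_model.predict(X_unlabeled) == y_pseudo).mean())
```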
Among the strategies for improving the performance of data processing systems, one of the most important consists in implementing a dynamic structure for the graph that reflects the computations necessary for processing the set of input data. In the deep neural network's case, the easiest way of implementing the dynamic structure approach consists in properly allocating, for a certain set of input data, the group of machine learning models that are suitable for it [20].
Another important method, useful in improving the performance and reducing the
execution time for a data processing algorithm that implements the classification task,
consists in training and using a sequence of such algorithms (classifiers), thus obtaining a
cascade approach. This strategy is suitable for the cases when one aims to identify with
high accuracy the occurrence of rare objects or events.
The Google search engine's researchers have implemented the cascade approach in many situations, for example when transcribing the address numbers that have been identified using the Street View technology. They have used a method that comprises two steps. In the first step, the process detects the location of the address number using a specific machine learning model. Afterwards, in the second step, another model is used in order to transcribe the desired number [1].
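The following sketch illustrates the general cascade idea with two scikit-learn classifiers on synthetic data (the models, threshold and feature sizes are illustrative assumptions, not the Street View pipeline itself): a cheap first-stage model filters out most negatives, and the more expensive second-stage model runs only on the inputs the first stage accepts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic data with a rare positive class (roughly a few percent of samples).
X = rng.normal(size=(4000, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 2.0).astype(int)

# Stage 1: cheap detector, used with a low threshold so it rarely misses a positive.
stage1 = LogisticRegression(max_iter=1000).fit(X, y)
# Stage 2: more expensive model, trained as usual.
stage2 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)


def cascade_predict(samples, threshold=0.1):
    # Run the cheap stage on everything and keep only the likely positives.
    keep = stage1.predict_proba(samples)[:, 1] >= threshold
    out = np.zeros(len(samples), dtype=int)
    # Run the expensive stage only on the retained samples.
    if keep.any():
        out[keep] = stage2.predict(samples[keep])
    return out


preds = cascade_predict(X)
print("fraction passed to stage 2:",
      (stage1.predict_proba(X)[:, 1] >= 0.1).mean())
print("cascade accuracy:", (preds == y).mean())
```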
Due to the undisputable advantages and usefulness of the deep learning neural networks,
the researchers are concerned about discovering and implementing more effective
strategies for improving the performance of these networks. This field of research
represents an open topic and a permanent challenge for the scientists, information
technology (IT) specialists, mathematicians, engineers and economists worldwide.

4. CONCLUSIONS

Lately, the deep learning class of machine learning algorithms has become more and more popular among researchers in various fields, such as speech, audio, graphics or pattern recognition, natural language processing and bioinformatics. Also, a wide range of architectures relying on this concept have emerged, like deep, convolutional deep, deep belief and recurrent artificial neural networks.
In our paper, we have first introduced the main concepts related to deep learning neural networks and their state of the art from the literature. Afterwards, we have analyzed implementation aspects of deep learning neural networks, we have revealed and justified their undisputed usefulness in solving a great variety of applications, highlighting the fact that the requirements of the developed applications influence the implementations and scale sizes of deep learning neural networks. We have paid special attention to the analysis and comparison of the most popular deep learning libraries/toolboxes available today and the programming languages in which they were developed. We have also highlighted the most important strategies for improving the performance of deep learning neural network implementations.
Taking into account the usefulness of deep learning neural networks, the benefits that they offer in various research fields, in industry, in the economy, in IT and the games industry, and the possibility of implementing these networks on GPUs and employing their huge parallel computational power, we can conclude that deep learning neural networks represent a functional, practical and efficient solution for successfully achieving outstanding results in a wide class of domains.


REFERENCES

[1] Goodfellow I., Bengio Y., Courville A., Deep Learning (Adaptive Computation and
Machine Learning series), Publisher: The MIT Press, 2016, ISBN-10: 0262035618,
ISBN-13: 978-0262035613.
[2] Deng L., Yu D., Deep Learning: Methods and Applications (PDF). Foundations
and Trends in Signal Processing. 7 (3–4), pp. 197-387, 2013, DOI:
10.1561/2000000039.
[3] Lungu I., Pîrjan A., Petroşanu D. M., Solutions for Optimizing the Data Parallel
Prefix Sum Algorithm Using the Compute Unified Device Architecture, Journal of
Information Systems & Operations Management, Vol. 5, Nr. 2.1/2011, pp. 465-477,
ISSN 1843-4711.
[4] Petroşanu D. M., Pîrjan A., Economic considerations regarding the opportunity of
optimizing data processing using graphics processing units, JISOM, Vol. 6, Nr.
1/2012, pp. 204-215, ISSN 1843-4711.
[5] Padua D., Encyclopedia of Parallel Computing, Springer Publishing Company,
Incorporated, 2011, ISBN:0387097651 9780387097657, pp 689-697.
[6] Clarke D., Daoud’s Page on Github, 17 Great Machine Learning Libraries, http://daoudclarke.github.io/machine%20learning%20in%20practice/2013/10/08/machine-learning-libraries, accessed on March 22, 2017.
[7] Tăbuşcă A., Learning a programming language for today, Journal of Information
Systems & Operations Management, Vol.9, No.1/2015, pp. 83-94, ISSN 1843-4711.
[8] Matthes E., Python Crash Course: A Hands-On, Project-Based Introduction to
Programming, No Starch Press, 2015, ISBN-10: 1593276036, ISBN-13: 978-
1593276034.
[9] Raschka S., Python Machine Learning, Packt Publishing ISBN-10: 1783555130,
ISBN-13: 978-1783555130, 2015.
[10] Teglor, http://www.teglor.com/b/deep-learning-libraries-language-cm569/, accessed
on March 22, 2017.
[11] Kaluza B., Machine Learning in Java, Packt Publishing, 2016, ISBN-10:
1784396583, ISBN-13: 978-1784396589
[12] Ierusalimschy R., Programming in Lua, Fourth Edition, Publisher: Lua.Org, 2016,
ISBN-10: 8590379868, ISBN-13: 978-8590379867
[13] Russel S., Sengupta A., Hanson L., Learning Julia: Rapid Technical Computing
and Data Analysis, O'Reilly Media, 2017, ISBN-10: 1491903600, ISBN-13: 978-
1491903605
[14] Pîrjan A., Petroşanu D. M., Dematerialized Monies – New Means of Payment,
Romanian Economic and Business Review, Vol. 3 Nr. 2/2008, pp. 37-48, ISSN
1842–2497.


[15] Pîrjan A., Petroşanu D. M., A Comparison of the Most Popular Electronic
Micropayment Systems, Romanian Economic and Business Review, Vol. 3, Nr.
4/2008, pp. 97-110, ISSN 1842–2497.
[16] Bengio Y., Ducharme R., Vincent P., A neural probabilistic language model, in
Advances in Neural Information Processing Systems 13 (NIPS’00), pp. 932–938,
MIT Press, 2001.
[17] Recht B., Ré C., Wright S.J., Niu F., Hogwild: A lock-free approach to parallelizing
stochastic gradient descent, Advances in neural information processing systems 24
(NIPS 2011), Curran Associates, Inc, Red Hook, NY, USA, pp. 693–701, 2011.
[18] Dean J., Corrado G., Monga R., et al., Large scale distributed deep networks, In
Proceedings of Neural Information Processing Systems (NIPS), 2012.
[19] Bucilua C., Caruana R., Niculescu-Mizil A., Model compression. In: KDD ’06:
Proceedings of the 12th ACM SIGKDD international conference on Knowledge
discovery and data mining, New York, NY, USA, ACM (2006) pp. 535–541, 2006.
[20] Bengio Y., Deep learning of representations: Looking forward, in Statistical
Language and Speech Processing SLSP 2013, Lecture Notes in Computer Science,
vol. 7978, Springer, Berlin, Heidelberg, pp. 1-37, 2013, DOI: 10.1007/978-3-642-
39593-2_1.


ON IMAGE RECOLORING - COLOR GRADING

Diana-Maria Popa 1*

ABSTRACT

This paper analyzes an example-based color transfer algorithm. The process of recoloring an input image by making use of another image has many applications in computer vision. This is the case of the digital restoration technique that tries to reapply color to paintings that have been degraded by dust, smoke or the passing of time. Throughout history, there have been many ideas proposed regarding color transfer computation.

KEYWORDS: Color Conversion, Color Mapping, Color Reduction

1. INTRODUCTION

The focus of this paper is to analyze and implement an algorithm that recolors an input image based on a palette image given by the user [2] [3]. The technique of transferring the feel of a second image to a source image, while keeping the dynamics of the initial one, is known as color grading. Basically, the color of an input image is adjusted in order to match the illumination and atmosphere of the palette image, also given as an input, thus obtaining an output image out of the combination of the two. This technique represents one of the fundamental processes in film grading. Most of the time this task is performed manually by professional video post-production experts. The color of an image can be so important that special techniques were developed in order to correct slight aberrations, using a predefined target [25]. An article published by ‘The Wall Street Journal’ in 2011 [20] lists the Video Post-Production Services in the ‘Top 10 Dying Industries’:

Figure 1. Example-based recoloring

1* corresponding author, Engineer, Politehnica University of Bucharest, 060042 Bucharest, Romania, diana.popa@cti.pub.ro


“…The widespread adoption of digital media have adversely affected the industry’s range
of services, from editing and animation to archiving and format transfer.”
This operation has also been referred to throughout the literature as example-based re-coloring stylization [10]. The idea of example-based recoloring is better described by Figure 1. The first ‘Gladiator’ picture has to undergo a transformation so that its colors match the palette of the second ‘Gladiator’ picture, regardless of what the first one may contain.
This paper concludes the work done in the license thesis presented at the faculty of
Automatics and Computers from the “Politehnica” University of Bucharest by the author
of the paper [28] and continues the work presented in [30].

2. RELATED WORK

Transfer of color statistics

One of the most popular methods proposed was that of Reinhard [4], whose strategy was to choose a suitable color space and then to apply a simple operation. By a suitable color space, Reinhard meant an orthogonal color space without correlations between the axes, which means that there is very little dependence between the color information. One color space that meets all those requirements is the Lab space, which, firstly, was developed by Ruderman [5] based on data-driven human perception research claiming that the human visual system responds best when processing natural scenes and, secondly, is a space that minimizes the correlation between channels. The Reinhard method is based on transferring the mean and standard deviation of the data points in the Lab space between images, along each of the three axes. The advantages of this technique are its simplicity, as it requires only two images to compute the desired result, and its efficiency, as the computing time is at most a few minutes or even shorter. Third, it is semi-automatic, demanding very little user intervention. There are also disadvantages to the Reinhard algorithm, as it is very dependent on how much the source image and the target image have in common in order to obtain acceptable results. All in all, the Reinhard method remains the first fast and robust color transfer solution for the image processing field and this is the reason why his work is very often referred to in other related research as ‘pioneering work’.

Figure 2. Color transfer between images (image taken from Reinhard [4])

To go deeper into the currently existent color transfer methods, [6] describes a method for Image Sequence Color Transfer (the ICST algorithm), where the user is requested to provide an input image, three target images and an integer N, indicating the number of images of the sequence that are to be rendered. Based on all these inputs, the ICST method is able to render an image sequence that illustrates characteristics from all three target images provided.

Figure 3. The input image, three target images, and the resultant basis images in the ICST algorithm (image taken from [6])

These two methods described above [5, 6] have a limitation: linear transformations. In all the practical situations where color transfer is needed, the recoloring techniques require non-linear color mapping.
One way to perform non-linear mapping is histogram matching, sometimes called histogram specification or histogram normalization, which is a basic signal processing technique [7]. It basically refers to the transformation of one histogram into another by remapping the signal values, so that it is used to adjust the statistical profile of a dynamic range. Histogram specification has a wide variety of fields in which it makes itself useful: computer vision, remote sensing, medical imaging, speech recognition, scientific visualization. It can also be used for fast image retrieval, for example searching in a large set of images for the ones that best resemble an input image. When it comes to image processing, histogram specification is primarily a means of obtaining image enhancement for visual inspection. Contrast enhancement makes the content of images easier to differentiate, thus more distinguishable.
A more complex type of mapping is treated in [8], where the color histogram equalization is performed via the deformation in color space of a mesh, in order for it to fit the histogram of an image, and then by applying some specific equation to map it to a uniform histogram.

Figure 4. Color Style Transfer Technique (image taken from Neumann [9])

Another color histogram matching method, which is simpler from an implementation point of view and also computationally fast, is described in [9]. The novelty in this case is introduced by the use of basic perceptual attributes, namely hue, lightness and saturation, instead of the classical approach of using opponent color channels.

Solutions to content variations

The main problem of color transfer based techniques is the fact that the content of the input image and that of the palette image may vary quite a lot and thus they do not work well together. For example, if the input image has more sky in it than grass and the palette image has more grass than sky, then the computation of the color transfer statistics is expected to fail. One solution to this problem is to select different samples (swatches) of grass and sky from the two images, compute the statistics separately and then render them back together in Lab space [4]. However, this is not a very practical solution, as it would be very time costly to perform this type of image segmentation in the case of big images and almost impossible to apply this type of technique when it comes to sequences of images.
A more interesting method was described in [10], where human color perception is taken into consideration. Each pixel value is classified into one of eleven basic color categories. These categories are derived from psychophysical experiments and are universal to every human being. By using these categories, the mapping of color is more likely to look undistorted and to be free of unnatural results.
Another way to avoid the problem of content variations when it comes to transferring the color between images is to make use of the spatial characteristics. This technique is widely used when transferring color from a colored image to a greyscale one [11]. Given that the greyscale image is represented by a 1D distribution, only the luminance channel can be matched and, taking into consideration that a single luminance value could represent multiple regions of an image, another criterion that is left to be used in order to guide the matching process is the statistics within the pixel’s neighborhood.

Figure 5. Colors are transferred based on neighborhood statistics (image taken from [11])

As for the method of color transfer presented in this paper, the technique is based on an exact color transfer technique, and the refinement part that deals with content variations is based on a re-graining process that will be presented further on. Pitié et al. [3] developed a method that succeeded in automating the process of color transfer by using color distribution transfer, which represents the basis of the algorithm, and implicitly of the framework, that will be presented in this paper.


3. RECOLORING ALGORITHMS – THEORETICAL PORTRAYAL

Automated color grading using color distribution transfer

Definition and concepts


Probability Density (Distribution) Function – PDF
By definition, a random variable X has the property of being continuous if the values that
it takes can form a “continuum”, meaning that the possible values that X can take are
represented by a single interval on the number line ( ∀ A, B | A < B, X can be any number
between A and B) or that the possible values are formed by a union of disjoint intervals.
Moreover, P(X=x) = 0 for any number x that is a possible value (solution) of X. The
function F(x) = P(X ≤ x) is called the cumulative distribution function (CDF).
One of the methods to obtain the PDF of some continuous values (also the one used in the algorithm proposed in this paper) is by plotting the histogram. For a measured quantity such as a temperature signal, that basically means recording the values using time slices. The smaller the slices, the finer the approximation will be, but the greater the quantity of data needed and the finer the curve that will be obtained based on the data gathered.
Definition: Let X be a continuous random variable. Then a probability distribution or a
probability density function (PDF) of X is a function f(x) such that for any two numbers
a and b with a<b,
P(a ≤ X ≤ b) = ∫_a^b f(x)dx    (1)

In more informal terms, the probability that X has some value in the interval [a,b] is
represented by the area under the graph and above the mentioned interval. The term of
density curve is often used to refer the graph of f(x). We can also note from the equation
(1) that the relation (2) is also valid, where f is the PDF and F is the CDF.
f ( x) = F ' ( x) (2)
Basically, the histogram is a contiguous set of rectangles whose width is the size of the step and whose height is given by the number of values from the input data set that fall into that specific bin (characterized by an inferior limit and a superior limit).
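As a small illustration of this construction (assuming NumPy; the sample data and bin count are arbitrary), the sketch below builds such a histogram and treats the normalized bin heights as an approximation of the PDF f, whose rectangles enclose a total area of one.

```python
import numpy as np

# Synthetic continuous sample values whose PDF we want to approximate.
samples = np.random.normal(loc=0.0, scale=1.0, size=10_000)

# Partition [min, max] into equal-width bins and count the samples per bin.
counts, edges = np.histogram(samples, bins=50)
widths = np.diff(edges)

# Normalizing the counts turns the histogram into an estimate of the PDF f:
# the total area of the rectangles becomes 1.
pdf_estimate = counts / (counts.sum() * widths)
print("area under the histogram:", float(np.sum(pdf_estimate * widths)))  # ~1.0

# The empirical CDF F is the running sum of the bin areas.
cdf_estimate = np.cumsum(pdf_estimate * widths)
print("F at the right-most bin edge:", float(cdf_estimate[-1]))  # ~1.0
```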

Color Distribution Transfer

Mathematical View
In the context of the results that we seek to obtain, the color distribution transfer is based on the transmission of the image statistics (presented above) of the target image (palette) onto the source image, thus obtaining a new image that combines the characteristics of the two (it preserves the content of the initial picture and blends it with the feel of the target).
Following the temperature example given above in order to describe the PDF, we can consider the input images as being two sets of random tuples. We say tuples and not
variables, because these tuples are the representation of each pixel of the image, which we will consider, for transparency, to be N-dimensional.
Let’s denote these two sets as {s_i}_{i≤S} and {t_i}_{i≤T}, where S and T are the cardinals (total number of pixels) of the two images (i.e. the source image and the target image), respectively. One tuple of an RGB color image is 3-dimensional and could very well be represented by s_i = (R_i, G_i, B_i).
Problem statement: Find a differentiable bijective mapping function m that takes as
input the tuples of the input image and outputs a new set of tuples whose statistics match
the statistics of the target tuples.
It is important to notice that the problem above is expressed for the continuous case. In the case of image datasets, which are represented by discrete values, the solution to this problem is given by making use of the histograms of the two sets of tuples. The continuous probability density function (see 3.2.1) that is sought to be used will be replaced by histograms and these will be the ones being transferred between the images. This problem has been referred to in the literature as the Distribution Transfer Problem [3].
One other aspect to be noted is that, in contrast to the greyscale algorithm presented in the previous paper, in the case of color transfer it suffices to use the RGB color space in the manipulation of the input data sets. There is no use in transforming it to Lab, Lαβ, Luv or any other color space, because this algorithm tries to transfer the statistics of every color channel simultaneously. In the case of greyscale conversion, only the luminance channel was being preserved through the manipulation.

The One-Dimensional case


The first step in solving the above mentioned problem is to first consider the 1D case, which means analyzing the case where the tuples that form the set of the images are of dimension 1 (the pixels are represented by a single channel value).
Considering all the above definitions, we can now transpose the problem into
mathematical equations:
Matching the target PDF g with the source PDF f
f ( s )ds = g (t )dt (3)
Finding a mapping function m that maps source to target
m( s ) = t (4)
Integrating equation (3) gives us:
∫^s f(s)ds = ∫^t g(t)dt    (5)
Making use of equation (4) results in:

∫^s f(s)ds = ∫^{m(s)} g(t)dt    (6)

Going back to the definition of the cumulative distribution function, and denoting the CDF of f by F and that of g by G, we obtain the mapping function which represents the solution to the initial problem statement:
G(t) = F(s) ⇒ G(m(s)) = F(s) ⇒ m(s) = G⁻¹(F(s)), ∀s ∈ ℝ    (7)
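A minimal NumPy sketch of this one-dimensional transfer is given below (the synthetic source and target samples and the bin count are arbitrary assumptions): the two CDFs F and G are estimated from histograms and the mapping m(s) = G⁻¹(F(s)) of equation (7) is evaluated through interpolated look-up tables.

```python
import numpy as np


def transfer_1d(source, target, bins=300):
    """Map the source samples so that their distribution matches the target's."""
    # Histograms over a common range play the role of the PDFs f and g.
    lo = min(source.min(), target.min())
    hi = max(source.max(), target.max())
    f, edges = np.histogram(source, bins=bins, range=(lo, hi), density=True)
    g, _ = np.histogram(target, bins=bins, range=(lo, hi), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Cumulative sums approximate the CDFs F and G.
    F = np.cumsum(f)
    F /= F[-1]
    G = np.cumsum(g)
    G /= G[-1]

    # m(s) = G^{-1}(F(s)): look up F at the source values, then invert G.
    u = np.interp(source, centers, F)
    return np.interp(u, G, centers)


src = np.random.normal(0.0, 1.0, 5000)
tgt = np.random.normal(5.0, 2.0, 5000)
mapped = transfer_1d(src, tgt)
print("mapped mean/std:", mapped.mean().round(2), mapped.std().round(2))
```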

The N-Dimensional case


In order to upgrade the solution to a higher dimensional situation, Pitié et al. [3] invoked the example of the Radon Transform. This transform, which has also been called the shadow transform (or the X-Ray transform), maps an N-dimensional function onto one-dimensional lines or axes [21]. In order to make it clearer why the authors used the Radon Transform as a solution to the problem, some concepts and definitions have to be introduced. To do that, Figure 6 will be used in order to better describe some basic notions that lie at the core of understanding the Radon Transform: line integrals and projections.

Figure 6. A color sample f(x, y) and its projection P_θ(θ, t) for an angle of θ

Let’s take the example of a color sample from an image, which is represented in Figure 6 by a two-dimensional function f(x, y). The line integrals are denoted by the parameters θ (the angle of inclination from the main axis) and t (the distance from the origin). The notation P_θ(θ, t) represents the Radon Transform and, as can be seen from Figure 6, it is used to calculate the projection of the multi-dimensional function onto some axis at angle θ.


This is what inspired Pitié et al. in [3]. The authors based their theory on the fact that, after projecting the N-dimensional PDF function, the result will be a sequence of one-dimensional projections. Manipulating this series of results for a number of iterations, the one-dimensional projection results of the source PDF, denoted with f throughout the paper, will finally match those of the target PDF g. In fact, this one-dimensional projection of an N-dimensional function onto some axis has been referred to as a marginal, as can be seen in Figure 7, which is very suggestive of the logic of the algorithm. The algorithm is referred to in the literature [3] as the Iterative Distribution Transfer (IDT).

Figure 7. Iterative Distribution Transfer (IDT) based on one-dimensional PDF transfer (image
taken from [3])

Let’s consider now all that has been said in theory above and analyze how the present
algorithm makes use of that. The equivalent of the 2-Dimensional function from above
will be the two 3-dimensional PDF functions f and g. In order to project them onto some axis, [3] proposes to use an orthogonal rotation matrix. This rotation matrix is denoted by R = [e_1, ..., e_N], where N = 3 and e_i are the rotated axes from the theoretical portrayal above. Mathematically, the projection of some tuple s = [s_1, ..., s_N]^T (the input sample) is obtained by multiplying a rotation axis with the sample:
e^T s = Σ_i e_i s_i    (8)

After projecting the two PDFs, the next step is finding the mapping by which all one-dimensional marginal projections of the source PDF match those of the target PDF. This is done exactly like in the one-dimensional case but, as a convention, we will denote the projection of the PDF f by f_e and that of the PDF g by g_e. By association, the CDFs F and G become F_e and G_e. Thus, equation (7) becomes:
m_e(s) = G_e⁻¹(F_e(s)), ∀s ∈ ℝ    (9)

After finding the mapping function m_e in the same way as for the one-dimensional case, which is through discrete look-up tables, there remains only one step: to add to the initial set of samples (tuples) the displacement along the axis, as suggested by the final step in Figure 7.
s ← s + (m_e(e^T s) − e^T s) e    (10)
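Putting equations (8)–(10) together, the sketch below is one possible compact NumPy rendering of the IDT loop, not the exact code of the framework (the 20 iterations and 300 bins are the arbitrary defaults mentioned later in the paper, and transfer_1d is the one-dimensional matching sketched earlier): at each iteration the samples are rotated, every projected axis is matched in 1D, and the displacement is rotated back and added.

```python
import numpy as np

# Uses transfer_1d() from the one-dimensional sketch above.


def idt(source, target, iterations=20, bins=300, seed=0):
    """Iterative Distribution Transfer on N-dimensional samples (one row per tuple)."""
    rng = np.random.default_rng(seed)
    s = source.astype(float).copy()
    n_dims = s.shape[1]
    for _ in range(iterations):
        # Random orthogonal rotation matrix R = [e_1, ..., e_N]; QR is used here
        # as a shortcut for orthogonalizing a random matrix.
        R, _ = np.linalg.qr(rng.normal(size=(n_dims, n_dims)))
        # Project both sample sets onto the rotated axes: column i holds e_i^T s.
        ps = s @ R
        pt = target @ R
        # Match every one-dimensional marginal, as in equation (9).
        matched = np.column_stack(
            [transfer_1d(ps[:, i], pt[:, i], bins) for i in range(n_dims)]
        )
        # Add the displacement along every axis, rotated back (equation (10)).
        s += (matched - ps) @ R.T
    return s


# Hypothetical usage: recolor the pixel cloud of an RGB source image so that its
# color statistics match a target image (both arrays of shape (num_pixels, 3)).
# recolored_pixels = idt(source_pixels, target_pixels)
```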

Implementation
The implementation of the algorithm described in the next chapter was based on the theoretical portrayal of the Iterative Distribution Transfer proposed by the authors in [3].

Generation of rotation matrices

The algorithm that lies at the core of the framework starts by generating the rotation matrices that are used to obtain the projections of both PDFs, f and g. The way they are generated was influenced by the analysis of convergence given in [3]. There, it is mentioned that in order to obtain convergence of the algorithm, which in simpler words means that at the end of the computation the marginals of the input PDF match the marginals of the target PDF, it is sufficient to use random orthogonal bases as rotation matrices. Thus, the implementation starts with a matrix that is known to be orthogonal. In order to obtain new orthogonal rotation bases, the Gram-Schmidt algorithm is applied on an initially randomly generated matrix and the resulting rotation matrix is then multiplied with the initial one. Taking into account the definition of an orthogonal matrix, the result also remains an orthogonal matrix. As per the algorithm presented above, the number of iterations applied on the two one-dimensional PDFs is equal to the number of orthogonal matrices generated, as in every iteration we look to obtain the projections on every orthonormal axis. An arbitrary number of 20 iterations was used, as per the analysis of convergence described in [3]. The user, however, can modify it and observe the difference in the output results, as will be shown in later chapters of this paper.
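The following sketch shows one way this generation step can look in NumPy (a hedged illustration of the description above, not the framework's actual code; function names are hypothetical): starting from a matrix known to be orthogonal, each new basis is obtained by applying Gram-Schmidt to a randomly generated matrix and multiplying the result with the previous one, which again yields an orthogonal matrix.

```python
import numpy as np


def gram_schmidt(matrix):
    """Orthonormalize the columns of a square matrix (classical Gram-Schmidt)."""
    q = np.zeros_like(matrix, dtype=float)
    for j in range(matrix.shape[1]):
        v = matrix[:, j].astype(float)
        for k in range(j):
            v -= (q[:, k] @ matrix[:, j]) * q[:, k]
        q[:, j] = v / np.linalg.norm(v)
    return q


def rotation_sequence(n_dims=3, count=20, seed=0):
    """Generate `count` random orthogonal rotation matrices, one per iteration."""
    rng = np.random.default_rng(seed)
    rotations = []
    current = np.eye(n_dims)  # start from a matrix known to be orthogonal
    for _ in range(count):
        random_basis = gram_schmidt(rng.normal(size=(n_dims, n_dims)))
        current = random_basis @ current  # a product of orthogonal matrices
        rotations.append(current)
    return rotations


rots = rotation_sequence()
# Orthogonality check: R^T R should be (numerically) the identity matrix.
print(np.allclose(rots[-1].T @ rots[-1], np.eye(3)))
```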

PDF matching
The next step of the algorithm is to actually make use of the orthogonal matrices created above and match the input PDF with the target PDF. In the process of creating a probability density function (PDF) for an image, the first step is to create a histogram for every color channel. This was the second most computationally intensive part of the program. The histograms are computed by firstly finding the minimum value and the maximum value of the input data sets of both images. In order to partition this interval, an arbitrary number of bins is chosen in the framework related to this paper. The above mentioned interval is then divided into equal slices and the bins are labeled with their inferior limit and superior limit. This way it is possible to place every value from the initial input set in its corresponding bin, and while doing this each bin's counter increases accordingly. The result of this manipulation is an array of structures (that basically represent the bins), with its length equal to that arbitrary number, containing the numbers of values from the data sets in ascending order of the bins. A bin of the histogram revolves around some color value and stores the relative frequency for it.
As per the theoretical portrayal of the IDT algorithm described above, the next step is represented by the one-dimensional PDF transfer. This is done for every RGB channel, as the algorithm expects to receive as inputs two color (RGB) images. In order to get smooth results, the single PDF transfer is done using linear interpolation. The logic of the function that performs linear interpolation basically solves equation (7), by making use of the two histograms of the source and palette images, which are received as inputs. The linear interpolation used in the algorithm implementation plays the role of a discrete look-up table technique. As per the equation mentioned earlier, the linear interpolation function is called twice: the first time it is called for the F(s) component of the equation, where the cumulative probability of a color value is searched based on two adjacent bins in the histogram (linear interpolation). Upon retrieving the result, the function in question is called for G⁻¹(F(s)). In the latter case, the technique is actually called a reverse look-up table technique, as this time the color value is obtained based on the quantile (the inverse function of G is applied). It is important to make the observation that without linear interpolation the resultant image would only contain as many colors as the number of bins used; by default the algorithm uses 300 bins, but this number can be varied by the user.

Results and discussion

The following analysis is based on the two images below. The first one is taken from [22]
and the second one from [23].

Figure 8. Source image (left) and target image (right) for color based transfer algorithm analysis

By default the number of iterations is 20 and the bin count is 300. The execution time will also be analyzed in each scenario. The first thing worth observing is the influence of the iterations parameter. Figure 9a represents the output when the executable received an input parameter of 5 iterations, Figure 9b was run with a parameter of 15 iterations and Figure 9c was run with a parameter of 20 iterations. It is worth noting how much of a visual difference can be noticed between the first two pictures and how little the last two vary in resemblance. This proves that the algorithm converged earlier than 15 iterations and that continuing to increase the number of iterations would have no further visual impact.

Figure 9. Results for the color based transfer algorithm: (a) 5 iterations, (b) 15 iterations, (c) 20 iterations

It can be said that the results proved to be satisfactory, but the overall performance of the algorithm could use some improvement. Table 1 below shows how the execution time varied with the number of iterations.
Table 1. Performance of example-based color transfer

Iterations   2         5         10         15         20
Time         34.86s    83.86s    159.55s    240.75s    318.33s

4. CONCLUSIONS AND FUTURE WORK

Image recoloring plays an important role in digital media technology, a field in continuous and fruitful growth, as can be seen for example in the increasing popularity of digital cameras.
Throughout this paper it has been proven that a simple application that can mimic, in a minimal way, the behavior of such an image editing tool can be created even with very little open source and third-party support. The framework developed through this paper
has several advantages.
Firstly, the framework offers some level of manipulation of the output images. The color
transfer executable implemented in this paper gives the user the possibility to vary its
iteration parameter. This parameter has a direct impact on how much of the feel,
atmosphere of the palette is being transferred to the original picture.
As far as improving the performance results, the algorithm for color transfer between images could benefit from a solution that deals with content variation. This means reducing the grainy appearance that can occur when the dynamics of the color in the source image and in the target image differ a lot. Another issue that could be further improved and analyzed would be the way the rotation matrices are generated in order to obtain an optimal sequence. This plays a major role in the quality of the result, because basically the finer the choice of axes is, the bigger the probability that the mapping function finds the best match.


5. REFERENCES

[1] Gooch, J. Tumblin, B. Gooch and S. Olsen. Color2Gray: Salience Preserving Color
Removal. In ACM Transactions on Graphics, Volume 24, Issue 3, 2005, pp. 634-369.
[2] F. Pitie, A. Kokaram and R. Dahyot. Towards Automated Colour Grading. In IEEE
European Conference on Visual Media Production, November 2005.
[3] F. Pitie, A. Kokaram and R. Dahyot. Automated Colour grading using colour
distribution transfer. In Journal of Computer Vision and Image Understanding,
February 2007.
[4] E. Reinhard, M. Ashikhmin, B. Gooch, P. Shirley. Color Transfer between Images. In
IEEE Computer Graphics Applications, Volume 21, Issue 5, 2001, pp. 34-41.
[5] D. Ruderman, T. Cronin, C. Chiao. Statistics of cone responses to natural images:
implications for visual coding. In Journal of the Optical Society of America,
Volume 15, Issue 8, 1998, pp. 2036-2045.
[6] C.M. Wang, Y.H.Huang. A novel color transfer algorithm for image sequences. In
Journal of Information Science and Engineering, Volume 20, Issue 6, 2004, pp.
1039-1056.
[7] M. Grundland, N.A. Dodgson. Color histogram specification by histogram
warping. In Proceedings of the SPIE, Volume 5667, 2004, pp.610-624.
[8] E. Pichon, M. Niethammer and G. Sapiron. Color histogram equalization through
mesh deformation. In IEEE International Conference on Image Processing, Volume
2, 2003, pp. 117-120.
[9] L. Neumann, A. Neumann. Color style transfer techniques using hue, lightness and
saturation histogram matching. In Proceedings of Computational Aesthetics in
Graphics, Visualization and Imaging, 2005, pp. 111-122.
[10] Y. Chang, S. Saito, K. Uchikawa, M. Nakajima. Example-based color stylization of
images. ACM Transactions on Applied Perception, Vol. 2, Issue 3, 2005, pp. 322-345.
[11] T. Welsh, M. Ashikhmin, K. Mueller, M. Nakajima. Transferring color to greyscale
images. Proceedings of ACM SIGGRAPH, San Antonio, 2002, pp. 227-280.
[12] R. Bala, R. Eschbach. Spatial color-to-grayscale transform preserving chrominance
edge information. In Color Imaging Conference, 2004, pp. 82-86.
[13] L. Neumann, M. Čadík and A. Nemscics. An efficient perception-base adaptive
color to gray transformation. In. Proc. Computational Aeesthetics, 2007, pp. 73-80.
[14] K. Smith, P. E. Landes, J. Thollot and K. Myszkowski. Apparent Greyscale: A
simple and fast conversion to perceptually accurate images and video. In Computer
Graphics Forum (Proc. Eurographics 2008), Volume 27, Issue 2, 2008, pp. 193-200.
[15] Y. Nayatani. Simple estimation methods for the Helmholtz-Kohlrausch effect. In
Color Research and Application, Volume 22, Issue 6, 1997, pp. 385-401.


[16] K. Yongjin, J. Cheolhun, J. Demouth and S. Lee. Robust Color-to-Gray via Nonlinear
Global Mapping. In ACM Trans. Graph, Volume 28, Issue 5, 2009, pp. 1-4
[17] H. Lekowitz. Color theory and Modeling for Computer Graphics, Visualization,
and Multimedia applications. In Kluwer Academic Publishers, 1997, pp.55-58.
[18] E. Reinhard and T. Pouli. Colour spaces for colour transfer. In Proceedings of the
Third International Conference on Computational Color Imaging, Volume 6626,
2011, pp. 1-15.
[19] M. Grundland, N. Dodgson. Decolorize: fast, contrast enhancing, color to grayscale
conversion. In Pattern Recognition, Volume 40, Issue 11, 2007, pp. 2891-2896.
[20] http://blogs.wsj.com/economics/2011/03/28/top-10-dying-industries/. “Top 10
Dying Industries - Real Time Economics – WSJ”, Accessed on: 27 Jun 2014.
[21] http://www.mathworks.com/help/images/radon-transform.html, “Radon Transform
– Matlab & Simulink”, Accessed on: 30 Jun 2014.
[22] Van Gogh, Vincent. Olive Trees. 1889. The Minneapolis Institute of Arts,
Minneapolis. http://www.vangoghgallery.com/catalog/Painting/360/Olive-Trees-
with-Yellow-Sky-and-Sun.html. Accessed on 1 Jul 2014.
[23] Van Gogh, Vincent. The Olive Trees. 1889. The Museum of Modern Art, New
York. http://www.vangoghgallery.com/catalog/Painting/359/Olive-Trees-with-the-
Alpilles-in-the-Background.html. Accessed on 1 Jul 2014.
[24] Andrei Tigora, Costin-Anton Boiangiu, “Image Color Reduction Using Iterative
Refinement”, International Journal of Mathematical Models and Methods in
Applied Sciences, Volume 8, 2014, pp. 203-207
[25] Costin-Anton Boiangiu, Alexandru Victor Stefănescu, “Target Validation and
Image Color Calibration”, International Journal of Circuits, Systems and Signal
Processing, Volume 8, 2014, pp. 195-202
[26] Costin-Anton Boiangiu, Alexandra Olteanu, Alexandru Victor Stefanescu, Daniel
Rosner, Alexandru Ionut Egner (2010). „Local Thresholding Image Binarization
using Variable-Window Standard Deviation Response” (2010), Annals of DAAAM
for 2010, Proceedings of the 21st International DAAAM Symposium, 20-23
October 2010, Zadar, Croatia, pp. 133-134
[27] Costin-Anton Boiangiu, Andrei Iulian Dvornic. “Bitonal Image Creation for
Automatic Content Conversion”. Proceedings of the 9th WSEAS International
Conference on Automation and Information, WSEAS Press, pp. 454 - 459,
Bucharest, Romania, June 24-26, 2008
[28] Diana-Maria Popa, “Recolorarea Imaginilor”/”Image Recoloring”, License Thesis,
Unpublished Work, Bucharest, Romania, 2014


[29] Costin-Anton Boiangiu, Ion Bucur, Andrei Tigora - „The Image Binarization
Problem Revisited: Perspectives and Approaches”, The Proceedings of Journal
ISOM Vol. 6 No. 2 / December 2012, pp. 419-427
[30] Diana-Maria Popa - “On Image Recoloring – Part 1: Correct Grayscale
Conversion”, The Proceedings of Journal ISOM, Vol. 10 No. 1 / May 2016 (Journal
of Information Systems, Operations Management), pp 222-234.


TETRA SYSTEM - OPEN PLATFORM - INTEROPERABILITY AND APPLICATIONS

Claudiu Dan Bârcă 1*

ABSTRACT

The Digital Terrestrial Trunked Radio System (TETRA) is a professional system standardized by the European Telecommunications Standards Institute (ETSI). In this paper we present the TETRA technology – an open and interoperable platform intended in particular to meet the communication needs of the public safety services, in order to ensure public order and safety (Public Safety Sector - PSS) and a prompt response to disasters (Public Protection and Disaster Relief - PPDR). The telecommunications industry worldwide has also recognized the opportunity and the perspectives which the TETRA standard offers as a high quality solution for unified communications for different entities.

KEYWORDS: Tetra, Interoperability, Wireless Communication, Public Safety, Emergency

INTRODUCTION

Any emergency situation management is generally a common and continuous process involving persons or communities, in order to avoid or diminish the effects of an eventual disaster impact [1].
Public security, which involves a series of tasks, implies an increased need for telecommunication services, especially while on the move [2].
Some of the challenges for these services are:
- in case of disaster – providing communication over a wide area, along with unitary communication between the different entities (police, fire, rescue), is necessary;
- car crashes on highways/in tunnels – in which, because of the large number of traffic communication applications, the public mobile phone services are overstretched, thus delaying/blocking a rapid response from the various emergency crews;
- interventions for restoring public order – which require special operations and the necessity of protected and encrypted communications for the security forces.
All over the world, for the majority of public safety and security structures, communication is vital. Therefore, in the last years, the communications between these structures have demanded services well above the capacity of the conventional radio networks used.

1* corresponding author, Assistant PhD, Faculty of Computer Science for Business Management, Romanian-American University, Bucharest, barca_dan@yahoo.com


Thereupon, replacing the conventional, analog systems with digital radio networks is a complex process, with both technical and commercial aspects.
Although each country is characterised by its specific situation, some dispositions are general:
- the solutions must follow an open standard rather than specific, individual solutions – one of the results of standardization is the interoperability of equipment from different manufacturers, which is vital for the network users and operators;
- the use of trunked digital radio systems (Tetrapol, Tetra): by comparison with conventional systems, trunked digital systems offer a series of advantages (better signal propagation and use of the frequency spectrum, increased resistance to interference, suitability for integrating various types of information, with the possibility of being efficiently encrypted);
- the achievement of national professional radio networks, in which the network infrastructure (switching centres, base stations) is available to a large number of users. Each user normally works in his own virtual network and connects with other users, along with other entities, if it is necessary.

TRUNKED DIGITAL NETWORK

Two technologies for emergency communication systems are available in the EU: TETRAPOL and TETRA.

TETRAPOL communication system


Tetrapol was born in France from the agreement between the French Gendarmerie and the company Matra Communications [3]. This technology is responsible for the national public safety communication network in France.
Tetrapol was not adopted by ETSI (the European Telecommunications Standards Institute), but different companies joined together and gave birth to a forum for the development of the technology, where specifications according to the ETSI standards were elaborated in order to disseminate technical information [3].
The Tetrapol system is known worldwide and represents one of the digital radio options for the military or public security. According to the Tetrapol forum, 83 Tetrapol networks are available worldwide, which means 850,000 radio terminals serving 460 million people in an area of 11,000,000 km2 [3].

Technical specifications
The Tetrapol system uses the Frequency Division Multiple Access (FDMA) technology, with a narrow band channel of 12.5 kHz and Gaussian minimum shift keying (GMSK) modulation, in the frequency range 70 – 520 MHz [1]. In Europe, Tetrapol is present in the 380 – 400 MHz band for the channels of the public safety and military services for national security. Thus, the Ultra High Frequency (UHF) band, with a 10 kHz or 12.5 kHz channel spacing and a duplex spacing of 5 MHz, is used. The Tetrapol base stations are connected to the fixed public network, the mobile one or the IP one.


The scheme of a Tetrapol network is presented in Figure 1. The interoperability of the Tetrapol network can be noticed, with IP interfaces available for each network unit. Thus, when integrating a Tetrapol network, the mechanism for sharing the logical channels on the IP platform and between the network units must be adopted.

Figure 1 The generic scheme of a Tetrapol network [3]

The Tetrapol system complies with the PMR (Professional Mobile Radio) requirements and ensures efficient radio coverage at reasonable costs for large networks, extended over densely populated or inaccessible areas. It offers an infrastructure capable of interconnecting national networks into cross-border, multinational ones.
Conceived as a national technology, TETRAPOL offers the possibility of multi-source acquisition through the TETRAPOL forum, which allows free access to patents for scientists worldwide, ensuring full interoperability of products from different sources.

The TETRA communication system


One of the most widespread digital radio systems is TETRA (Terrestrial Trunked Radio). This standard was approved and defined by the European Telecommunications Standards Institute (ETSI) as the only official European standard for professional mobile radio communication (PMR).
It is a global standard for radio communication in the same manner that GSM is the standard for mobile phones. It provides regular and professional cellular services: group communications, field worker management services (dispatching) and efficient data services. It is a unique combination of group voice communication services, mobile phone and data services, specially conceived for use by the authorities.
In a TETRA trunked radio system, the radio channels are centralised and the system automatically assigns the available channels to users at the beginning of each call. The assignment of channels from a common pool is called trunking and the systems providing this assignment method are therefore called trunked. TETRA uses the digital TDMA technology (Time Division Multiple Access).
A TETRA channel width of 25 kHz takes four temporal slots or communication channels
figure 2.
The analogic systems are based on the frequency modulation and a channel takes a 12.5
kHz or 25 kHz width band. Consequently, TETRA is two to four times more efficient
than the analogic classical systems.
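As a quick numeric check of this claim, the short Python sketch below (illustrative only; it simply reuses the channel widths quoted in the text, not measured data) computes the spectrum occupied per voice channel by a 25 kHz TETRA carrier with four time slots and compares it with classical 12.5 kHz and 25 kHz analogue channels.

# Illustrative check of the efficiency figures quoted in the text (not measured data).
TETRA_CARRIER_KHZ = 25.0          # one TETRA carrier
TETRA_SLOTS_PER_CARRIER = 4       # TDMA time slots, i.e. voice channels per carrier

khz_per_tetra_voice_channel = TETRA_CARRIER_KHZ / TETRA_SLOTS_PER_CARRIER   # 6.25 kHz

for analog_khz in (12.5, 25.0):   # classical analogue FM channel widths
    gain = analog_khz / khz_per_tetra_voice_channel
    print(f"versus a {analog_khz} kHz analogue channel: TETRA carries {gain:.0f}x more voice channels per kHz")

Running the sketch prints the factors 2 and 4, matching the "two to four times more efficient" statement above.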

Figure 2 The TDMA communication for TETRA [4]

TETRA uses three frequency bands. In Europe, the 380-400 MHz band is reserved exclusively for security and public safety, while the 410-430 MHz band is used for professional radio communication systems. Outside Europe, the 800 MHz band is the one chosen for the TETRA system [5].
The TETRA standard (first version) was the first standard known as an open system for professional radio communication. It was developed by the European Telecommunications Standards Institute (ETSI) together with the user organizations in order to ensure its proper functionality, and it was rapidly adopted by national administrations. The standardization efforts continue by adding new characteristics, such as the possibility of interconnection with mobile communication standards like GSM, GPRS and UMTS. Besides standards for the network elements, other TETRA services and facilities are standardized. Among these, the most important are:
• Advanced and fast dialling services - unencrypted and encrypted
• Individual calls - unencrypted and encrypted
• Fast service data - unencrypted and encrypted
• Packet data services - unencrypted and encrypted
The first version of the TETRA system (voice + data) provides a comprehensive portfolio of services and facilities, but the increased demands of the users led to the technology's evolution. Important events in the telecommunication industry, together with changes in market needs, allowed a large number of services and facilities to be standardized and thus included in the second version of the TETRA system.
The new services permitted:
• The TMO coverage expansion (Trunked Mode Operation)
• The AMR vocal codec (Adaptive Multiple Rate)
• The MELPe advanced vocal codec (Mixed Excitation Linear Predictive)
• The TEDS advanced data services (TETRA Enhanced Data Service)
TETRA radio networks can support IP-over-TETRA, which is realized by the IP gateway available in most TETRA systems. This gateway allows the exchange of data and status messages between a TETRA terminal and an application connected to the IP network. Packet data is also available on some TETRA systems, which allows the exchange of IP data between an application connected to the TETRA terminal and a host application on an Intranet server [6,7,8]. An alternative to IP-over-TETRA is using WAP (Wireless Application Protocol) on TETRA terminals. The network topology of a TETRA-over-IP system is very flexible. Basically, any topology is supported, including star topology, mesh topology, ring topology and combinations thereof. It is just a matter of setting up links and routers in order to achieve the required topology. In most cases, a combination of these topologies is the best choice.
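To illustrate this topological flexibility, the following hedged Python sketch models a hypothetical TETRA-over-IP site plan as a graph of IP links (a star branch closed by a ring) and checks that every base station can reach the switching node; the node names and links are invented for the example and are not part of the TETRA standard.

# Hypothetical illustration: a TETRA-over-IP site plan modelled as an undirected graph.
# Node names and links are invented for the example; they are not prescribed by TETRA.
from collections import deque

links = {                              # IP links between a switching node and base stations
    "switch": ["bs1", "bs2"],          # star branches
    "bs1":    ["switch", "bs3"],
    "bs2":    ["switch", "bs4"],
    "bs3":    ["bs1", "bs4"],          # ring segment closing the loop bs1-bs3-bs4-bs2
    "bs4":    ["bs2", "bs3"],
}

def reachable_from(start, graph):
    """Breadth-first search: which nodes can exchange IP traffic with 'start'."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Every base station must reach the switching node for the topology to be usable.
assert reachable_from("switch", links) == set(links), "some site is isolated"
print("all sites connected; the star/ring combination is a valid topology")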
TETRA-over-IP offers several advantages. One of the most important is that IP is used to connect all elements in a TETRA network. TETRA-over-IP thus provides a single architecture for multiple purposes (figure 3).

Figure 3 Tetra over IP (source - Mobile Broadband Communications for Public Safety-© 2015
John Wiley & Sons, Ltd)


INTEROPERABILITY FOR TETRA OPEN MARKET


TETRA is an open standard for digital PMR which created the foundations for markets with several providers and introduced TETRA products from several manufacturers.
The aim of interoperability is to ensure that TETRA products - mainly TETRA terminals - can be used on any network from any vendor and to facilitate the flow of critical information. This improves timeliness and accuracy by reducing the circuit duration needed for communication and the possible causes of interruption, and by reducing the number of people who may commit errors.
In Europe, standardization is the key word for almost all international activities closely related to public safety communications. Internationally and technically, there are three main initiatives used to address communication:
• Police: Schengen police cooperation
• Standardization institutions: the European Telecommunications Standards Institute (ETSI)
• Institutions supporting the European standard (TETRA MoU)
Interoperability between networks and terminals from different vendors is of great importance in building large shared networks. It allows increased flexibility and better prices for the user companies. To ensure interoperability, a neutral party performs interoperability tests and certificates are awarded according to the interoperability profiles defined by the Tetra MoU.
The technological development of the Tetra system comprises: standardization, Tetra MoU interoperability tests, manufacturing and final interoperability testing (figure 4).

Figure 4 Tetra system technological developments [9]

TETRA SYSTEM - IMPLEMENTATION

TETRA is the only official European standard for professional radio communication, and with this system public safety agencies have a wide range of communications and new operational opportunities. TETRA provides secure voice and data transmission, allows all public safety agencies to join a single communication system and also connects to telecommunication networks. In addition to voice, data can be transmitted simultaneously. TETRA can also transmit fingerprints and photographs of wanted persons to mobile terminals.

The Tetra system is implemented not only in Europe but also in the United States, Canada, Australia, New Zealand and in many countries in Asia and Africa [10,11,12].
In Europe (figure 5), Tetra networks are implemented in: Italy, Ireland, Poland, Greece, Bulgaria, Portugal, Slovenia, Romania, Lithuania, the Vatican State, Iceland (the IRJA network), Finland (VIRVE), Hungary (Pro-M), Sweden (Rakel), Norway (Nodnett), Belgium (ASTRID), Denmark (TetraNet), Austria (Digitalfunk), the Netherlands (C2000) and Germany (BOS Digitalfunk).

Figure 5 Tetra networks in Europe [10]

Tetra applications in Romania
In Romania, the mobile radio communications infrastructure is managed by the Special Telecommunications Service [www.sts.ro] and consists mainly of professional digital radio communication systems (TETRA, TETRAPOL) and local conventional networks, which provide voice and data mobility for the state authorities.
Romania's Tetra network coverage (red) is shown in figure 6.

Figure 6 Romania network coverage Tetra [13]


The main applications developed on this infrastructure are:

• Intelligent management and control of urban traffic in Bucharest [14]

The components of the "Intelligent Transportation System" are:
- the Urban Traffic Control subsystem (UTC)
- the Public Transport Management subsystem (PTM)
- the Closed-Circuit Television subsystem (CCTV)
They are complemented by the Control Center, which receives information from all three subsystems and uses it to obtain traffic data and prioritize needs.

• Operating system for emergency management

This system was implemented as a result of the need to ensure radio voice and data services for emergencies. Its primary objective is to develop comprehensive services for the state authorities in charge of managing events and emergencies such as 112 and eCall [15,16,17], including in areas where there are currently no telecommunications services offered by other operators.

• Radio system interoperable for public authorities

The common TETRA platform belongs to the following authorities: the Ministry of Interior (Romanian Police, Border Police, General Inspectorate for Emergency Situations, SMURD, Romanian Gendarmerie), the Romanian Intelligence Service, the Ministry of Health (Ambulance), the Ministry of Finance (National Agency for Tax Administration), the Ministry of Defense, the Protection and Guard Service and other authorities under the supervision or control of the Government.

• Integrated System for Border Security of Romania


TETRA technology has been adopted in the Schengen Convention as the standard for the communication systems of law enforcement authorities, including the police, in all European countries. Since this technology was planned for border security, a national consensus was reached on the development of common platforms. The platform will also cover the mobile radio communication requirements of all defense institutions and authorities with responsibilities in citizen safety.
The common TETRA platform provides the services required by the authorities: support for secure voice and data services and for database queries of the national and Schengen Information Systems.
Romania has complied with the commitments made in the negotiations with the European Union. The border security project involved the purchase, installation and integration of modern surveillance systems in order to obtain modern command and control centers serving over 180 operating locations of the border police. This made possible an efficient, systematic and continuous control and surveillance of the border, especially in the sectors that will become the European Union's external borders. The project also provides for expanding and developing the IT and communication infrastructure necessary for cooperation


between all institutions and competent authorities at the border and with the neighboring countries (Hungary, Bulgaria, Moldova); the principle is shown in figure 7.

Figure 7 Model of Cross Border Cooperation [2]

This scenario involves various nations or regions separated by geo-political borders and their national authorities. They are usually equipped with communication systems based on different standards or operating on different frequencies. In this scenario, interoperability is the main challenge, while traffic capacity is well planned.
Figure 8 presents the counties covered by the Phare programme or by the Integrated Border Security System (IBSS), where investments have been made in the development of Tetra. The system is completed today.

Figure 8 - Counties covered by the Phare programme or the Integrated Border Security System (IBSS)

• Alert system in case of earthquake


Earthquakes are a major natural disaster. An early warning system was developed in Romania to issue a warning 25-35 seconds in advance (for the city of Bucharest) in case of earthquakes with a magnitude above 6.5 (Figure 9). To reduce the losses caused by earthquakes it is very important to use advanced technologies. The Earthquake Early Warning System will make it possible to limit natural and economic losses in emergencies caused by earthquakes. Information is the key point in the management of disasters, and the Internet is one of the most commonly used tools, with reduced costs [18]. The TETRA system is used to send the warning signals to local authorities and civil protection units.
Compared with GSM, the DECT system has the shortest information transmission time (figure 10).

Figure 9 Wave propagation time [19]   Figure 10 Shortest transmission information

Such alarm systems act before the arrival of the secondary wave, which improves the reaction in case of an earthquake. All this alert information is transmitted over secure systems between the earthquake evaluation systems and the users.
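To see where a warning margin of the order of 25-35 seconds can come from, the following rough Python sketch estimates it from wave travel times; all numeric values (distance, depth, wave speeds, processing delay) are illustrative assumptions chosen for the example, not the operational parameters of the Romanian system.

# Rough, illustrative estimate of the warning margin for Bucharest.
# All values below are assumptions for the sketch, not the Romanian system's parameters.
import math

HYPOCENTRE_DEPTH_KM = 100.0        # assumed depth of a Vrancea intermediate-depth event
EPICENTRE_TO_BUCHAREST_KM = 140.0  # assumed epicentral distance to Bucharest
P_WAVE_SPEED_KM_S = 6.5            # assumed P-wave speed (fast, used for detection)
S_WAVE_SPEED_KM_S = 3.5            # assumed S-wave speed (slower, destructive)
PROCESSING_AND_TETRA_DELAY_S = 2.0 # assumed detection processing + alert delivery

s_wave_path_km = math.hypot(EPICENTRE_TO_BUCHAREST_KM, HYPOCENTRE_DEPTH_KM)
s_wave_arrival_s = s_wave_path_km / S_WAVE_SPEED_KM_S      # when strong shaking reaches Bucharest
detection_s = HYPOCENTRE_DEPTH_KM / P_WAVE_SPEED_KM_S      # P-wave reaches epicentral sensors
warning_s = s_wave_arrival_s - detection_s - PROCESSING_AND_TETRA_DELAY_S

print(f"estimated warning margin for Bucharest: ~{warning_s:.0f} s")

With these assumed values the sketch yields roughly 32 seconds, consistent with the 25-35 second range quoted above.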

CONCLUSIONS

The digital Terrestrial Trunked Radio system (TETRA) was introduced as the first open standard for professional mobile radio equipment. Tetra was adopted in the European area as a cross-border system and, consequently, some states have used this technology in the European Union accession to the Schengen chapter. Tetra is an interoperable coverage system with a fast response speed and a high degree of availability and security of communications. Studies from the National Institute of Research and Development for Earth's Physics showed that the Tetra platform can also be used as an early warning system in case of earthquakes.

REFERENCES

[1] Ramon Ferrús and Oriol Sallent- Mobile Broadband Communications for Public
Safety: The Road Ahead Through LTE Technology-2015 John Wiley & Sons, Ltd
[2] Baldini, G., Karanasios, S., Allen, D., & Vergari, F. Survey of Wireless
Communication Technologies for Public Safety. Communications Surveys &
Tutorials, IEEE, 16(2), 2013, 619-941.


[3] www.tetrapol.org
[4] TETRA Association, “PS and commercial services”, work document presented at
CEPT ECC FM49 meeting, November 2011.
[5] ETSI EN 300 392-1 V1.4.1 (2009-01), Terrestrial Trunked Radio (TETRA); Voice
plus Data (V+D); Part 1: General network design.
[6] Claudiu Pirnau, Mihai Alexandru Botezatu, Iuliu Stefan Grigorescu, “Databases
Role Correlated With Knowledge Transfer Between Entities Of A Cluster”, “Mircea
cel Batran” Naval Academy Scientific Bulletin, Volume XIX – 2016 – Issue 1,
Editura “Mircea cel Batran” Naval Academy Press, pg 476-483, Constanta,
Romania
[7] Barca Cristian, Barca Claudiu, Cucu Cristian, Gavriloaia Mariuca-Roxana,
Vizireanu Radu, Fratu Octavian , Halunga Simona - A Virtual Cloud Computing
Provider for Mobile Devices. Proceedings of the International Conference on
Electronics, Computers And Artificial Intelligence - ECAI-2013
[8] Claudiu Pirnau, Mihai Alexandru Botezatu -Service-Oriented Architecture (SOA)
and Web Services -Database Systems Journal vol. VII, no. 4/2016
[9] http://www.telekomunikacije.rs
[10] Phil Kidner-An introduction to Tetra in Europe –ENNA Conference2013
[11] Phil Kidner -Tetra-today-in-Europe : Tetra today issue 36, 2017
[12] http://www.tetratoday.com
[13] Vasilca, I.-S. (2012) “The STS National Network”, EU Emergency Services
Workshop,http://www.eena.org/ressource/static/files/_sts-national-network.pps
[14] www.uti.eu.com
[15] George Căruţaşu, Cezar Botezatu, Mihai Botezatu, Expanding eCall from cars to
other means of transport, Journal of Information Systems & Operations
Management, Vol. 10 No.2 / December, pag. 354-363, 2016, ISSN 1843-4711,
[16] Botezatu Cezar, Botezatu Cornelia Paulina, Carutasu George, Barcă Claudiu Dan -
eCall safety transportation management systems –features and capabilities, Annals
of the Oradea University, CD -ROM Edition, Volume VIII (XVIII) 1583-0691
[17] www.sts.ro
[18] Cristina Coculescu, Mironela Pirnau, Mihai Alexandru Botezatu, Processing and
interpretation of statistical data regarding the occurrence of earthquakes, Journal
of Information Systems & Operations Management, Vol. 10 No.2 / December, pag.
364-372, 2016, ISSN 1843-4711
[19] Constantin Ionescu- Earthquake early warning system and disaster management
concept for Romania-Bulgaria Cross-Border area-prezentare DACEA - "Sistem de
alerta in caz de cutremure pentru regiunea transfrontaliera Romania – Bulgaria"
(Giurgiu, 13 octombrie 2011)
[20] 123seminarsonly.com


FINANCIAL CONTROL IN AN IT ENVIRONMENT: WARRANT OF THE FINANCIAL PERFORMANCE OF THE ENTITY

Alice Tinta 1*

ABSTRACT
In a computerized society based on advanced knowledge, techniques and procedures that use comprehensive knowledge of the economic environment are being developed quantitatively and qualitatively in the virtual, online environment. Using the online environment as IT support for financial control is one of the most effective research approaches in the transition from the classical economy to the smart economy, taking into account the principles of sustainable development. The main purpose of the research is the organization and implementation of financial control in a paperless environment, bringing a major innovation to the practice of financial control based on uncertainty risk factors (environment, strategy, leadership style) and thereby contributing to practical changes.

JEL CLASSIFICATION CODE: M41

KEYWORDS: Financial Control, IT Environment, Financial Performance, Entity, Management Decision, Accounting Informational System.

1. INTRODUCTION

The need to develop an effective control system conforms to the paradigm of development of the economy and society: the ability to identify the best ways of measuring financial flows, their speed and rate, according to changing risk factors, in order to achieve growth and a reasonable level of performance considered optimal. Given the current development of economies and the intensity of progress in research and in the implementation of advanced technologies across the entire economic system, financial control needs to adapt to dynamic economies. Thus, by implementing the computerized system through online financial control, control practice requires innovation, driven mainly by changing risk factors (environment, strategy, leadership style).
On-line financial control involves, on the one hand, the planning, performance evaluation and coordination of financial activities aimed at achieving the desired return on investment and, on the other hand, the use of any means of information technology as a lever to support managers in operating relations and other financial instruments in order to exercise financial control. The need to use the means of information technology is evident in the

1
* corresponding author, PhD Lecturer, Romanian - American University, Bucharest, alicetinta@yahoo.com


context of globalization and the intensification of financial flows: access to information is conditioned mainly by the technical means, the speed of response, and the understanding of the hidden meaning of information and of the risks it contains.
Rapid advances in technology provide economic entities and the business environment with an unprecedented economic boom, mainly aimed at reducing the widening gap between strategy, as part of financial performance, and business processes bearing risk and uncertainty.
The advantage of implementing financial control in an IT environment lies in supporting the IT system via the digital economy, as a prerequisite for streamlining decision-making.

2. FINANCIAL AND ACCOUNTING INFORMATIONAL SYSTEM

The financial accounting information system (SIFC) is an information system that tracks the financial events that have occurred and summarizes them in reports and in the financial information obtained by the entity in the period under review.
In its basic form, a SIFC is little more than an accounting system configured to operate in accordance with the specifications and needs of the environment in which it is installed.
Generally, the term SIFC refers to the use of information and communications technology in financial operations to support management and budget decisions and the preparation of financial reports and statements.
In the State sector, SIFC refers more specifically to the computerization of public financial management (IMFP), of the processes of budget preparation, budget execution and reporting, with integrated financial management in ministries, agencies and other public sector operations.
The main element integrated in a SIFC computer system is a common, unique database, with all data expressed in financial terms. Integration is the key to any successful SIFC and it assumes that the system has the following basic characteristics:
• standard data classification for recording financial events;
• internal controls over data entry, transaction processing and reporting;
• common processes for similar transactions and a system design that eliminates unnecessary duplication of data input.
Integration often applies only to basic financial management; ideally, a SIFC would also cover the information systems with which the core system communicates, such as human resources, payroll and revenue (fiscal and customs)1.

1
Transparency International Source Book 2000, in Casals and Associates, “Integrated Financial Management
Systems Best Practices: Bolivia and Chile”, funded under USAID Contract AEP-I-00-00- 00010-00, Task
Order No. 01 Transparency and Accountability, 2004;


Organizing a SIFC provides easy access to financial data: it stores all financial information on the costs of the current and previous years, as well as the approved budgets for these years, detailing the inflows and outflows of funds, complete asset inventories (equipment, land and buildings) and liabilities.
The scale and scope of a SIFC can vary from a simple approach covering budget revenues, expense control, debt and human resources management and financial reporting, up to audit processes.
By recording information in an integrated system that uses common values, users may access the SIFC by extracting the specific information needed to perform different functions and tasks. All kinds of reports can be created: financial statements, sources and uses of funds, cost reports, reports on investments, age of receivables and payables, cash flow forecasting, budget variance and performance reports.
Managers can use this information for a variety of purposes: to plan and formulate budgets, to compare the results with the budgets/plans, to manage cash balances, to track the status of debts and receivables, to monitor the use of fixed assets, to monitor the performance of specific departments and units, and to make revisions and adjustments as needed. Reports can also be adjusted to meet the reporting requirements set by external agencies and international institutions such as the IMF.
Computerized financial management systems are not a new phenomenon; on the contrary, recording financial information is the oldest known form of record keeping, dating back thousands of years. However, financial information experienced problems for a long time, especially those related to keeping track of money. Only later, in the 15th century, did Luca Pacioli, the Italian father of accounting, establish the codified expression of double entry to develop a financial accounting system, leading in time to the modernization of financial management and accounting.
All systems, including the modern financial management information systems used in the private and public sector, are based on Pacioli's technical innovation. The system is called double entry because each transaction involves an exchange between two accounts: debit and credit. For each debit there is an equal and opposite credit, and the sum of all debits must therefore be equal to the sum of all credits. This balance requirement is critical because it facilitates the detection of errors or discrepancies in the recorded transactions. Furthermore, it offers a complete picture of the financial situation of an organization, and double-entry bookkeeping facilitates the production of financial reports directly from the accounts.
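As an illustration of Pacioli's balance rule, the minimal Python sketch below (our own example, not a production SIFC) models a journal entry and rejects any entry whose total debits do not equal its total credits.

# Minimal sketch (not a production SIFC): a journal entry is accepted only if
# total debits equal total credits, the invariant of double-entry bookkeeping.
from dataclasses import dataclass

@dataclass
class Posting:
    account: str
    debit: float = 0.0
    credit: float = 0.0

def post_entry(ledger, description, postings):
    """Append a balanced journal entry to the ledger, rejecting unbalanced ones."""
    total_debit = sum(p.debit for p in postings)
    total_credit = sum(p.credit for p in postings)
    if round(total_debit - total_credit, 2) != 0:
        raise ValueError(f"unbalanced entry '{description}': {total_debit} vs {total_credit}")
    ledger.append((description, postings))

ledger = []
post_entry(ledger, "supplier invoice",
           [Posting("expenses", debit=1000.0), Posting("accounts payable", credit=1000.0)])
post_entry(ledger, "payment",
           [Posting("accounts payable", debit=1000.0), Posting("bank", credit=1000.0)])
print(f"{len(ledger)} balanced entries recorded")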
Figure no. 1 shows the financial and accounting information system and financial and
accounting management cycle.


Figure 1. SIFC and accounting and financial management cycle


Source: own design

Figure No. 1 shows the complex set of functions of a SIFC, which can be used to support the creation and execution of the budget, control and review, and the assessment of financial performance and results.
The amount of information created, processed and sent to the economic entity in the current context is so large that such activity can no longer be conceived without exploiting IT solutions.
The engine of these changes at company level is the Internet, which is both a cause and an effect of globalization and of everything that follows from the transformation of society as a whole and of the economic society in particular. It was therefore unlikely that the entity, through its specific functions, would not adapt to the new demands of society. Among its functions, the marketing function, which is the most liberal, adapted more easily; the accounting function, we must recognize, is perhaps the most rigid and has a lower degree of adaptation to the new conditions because it is a strictly regulated system, which makes the transition to an Internet environment difficult.
Specifically, the Internet has been a source of documentation and infrastructure and less a communication environment for the development of new accounting applications. In time, as the emphasis was placed on processing and distribution over the Internet through the services it offers, a model of organization was established, based on protocols used initially in local area networks and later in organizational intranets.

3. IT SYSTEM AND FINANCIAL CONTROL.

The information system is a basis for decision-making in financial control. It is a complex system that can be divided into three sub-systems:
• the executive sub-system
• the information sub-system
• the control sub-system.
Figure No. 2 shows the link between the computer system and financial control.

Figure 2. The interdependence between the computer system and financial control
Source: processed after Budugan D., Berheci I., Georgescu I., Beţianu L., 2007

The executive sub-system is a basic means of control covering areas such as purchasing, production, sales, finance and others. The information sub-system is the link between the executive and management sub-systems, and its task is to provide timely information to the executive and managing sub-systems for decision making.
Information sub-systems, in accordance with the criteria for accounting roles and tasks, are divided into accounting and non-accounting information sub-systems1 (financial accounting information system, management accounting information system). It is important to know that these systems are not physically organized into separate modules but are often integrated into overlapping areas of responsibility.

1
Romney, M., Steinbart, P. J. (2009). Accounting Information System. Eleventh Edition. Pearson Prentice
Hall


With the advancement of information technology and communications, management is in a better position during the decision-making process, since the accounting information system can be constantly improved and updated to support decision-making within entities1.

4. INDIVIDUALITIES OF FINANCIAL CONTROL IN IT ENVIRONMENT.

In the context of technological developments, the entity is a basic link in the value chain, which must be active in order to achieve its objectives and make a profit under various forms of financial control. There are many factors that force entities to adopt an alert behavior for economic survival, based on financial control.
Within the entities, the components of the information system are the departments that form the entity. Among the components of the information system, certain relationships within the control activity are established. Knowing these relationships determines the degree of knowledge of the control system and acts as a measure to safeguard the activity.
Information technology via the Internet has opened a new economy. The term new economy has been used much more in recent years because today's economy is based on the Internet. This new economy is based on knowledge creation and on the use of knowledge in the economic field, particularly towards innovation.
Within a transaction there is a control activity over the data stored in the system database; once collected, the data can be retrieved and processed into useful information for decision making in business processes. The application takes the raw data held in the system database, processes them based on the configured business logic and passes them to the user presentation layer for display.
For example, consider the accounts payable department processing an invoice: for an accounting information system to operate on the basis of an invoice provided by a supplier, the invoice data must be stored in the system. When the goods are received from the seller, a receipt is created and entered into the information system. Before closing the accounts, the department pays the seller for the merchandise; the payments are processed automatically at system level, a voucher is created and the seller is paid.
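A hypothetical sketch of this accounts payable flow is given below: the payment voucher is created only when the stored invoice matches the goods receipt; the field names and the matching rule are invented for illustration.

# Hypothetical sketch of the accounts payable flow described above. Field names are invented.
invoice = {"supplier": "S1", "item": "paper", "qty": 10, "unit_price": 4.0}   # stored in the system
receipt = {"supplier": "S1", "item": "paper", "qty": 10}                      # entered on goods arrival

def create_voucher(invoice, receipt):
    """Return a payment voucher only if the receipt confirms the invoiced delivery."""
    if (invoice["supplier"], invoice["item"], invoice["qty"]) != \
       (receipt["supplier"], receipt["item"], receipt["qty"]):
        raise ValueError("invoice and receipt do not match; payment blocked")
    return {"pay_to": invoice["supplier"], "amount": invoice["qty"] * invoice["unit_price"]}

print(create_voucher(invoice, receipt))   # {'pay_to': 'S1', 'amount': 40.0}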

5. BENEFITS AND IMPLICATIONS OF FINANCIAL CONTROL OVER FINANCIAL ACCOUNTING SYSTEM IN AN IT ENVIRONMENT

After the wave of corporate scandals at major companies such as Tyco International, Enron and WorldCom, particular emphasis was placed on applying a strong system of internal controls over transactions. Such controls were mandated with the passage of the Sarbanes-Oxley Act of 2002, which stipulates that these companies must produce an internal control report, attest who is responsible for the internal control structure of the entity and present the effectiveness of these controls.2 Because of the great scandals

1
Pierce, B., O’Dea, T. (2003). Management Accounting Information and the Needs of Managers – Perception
of Managers and Accountants Compared. The British Accounting Review Vol.35, No.3, pp. 257- 290.
2
http://www.coso.org;


that were embedded in companies' accounting practices, much of the focus of the Sarbanes-Oxley Act has been on computer-based accounting information systems. The vendors of accounting information systems therefore pursue governance, risk management and compliance features to ensure that business processes are robust and that the assets of the entity are protected.
Table 1 presents the benefits and implications of financial control in an IT environment.
Table 1. Benefits and implications of financial control over the financial accounting system in an IT environment

Benefits and implications of financial control over the financial accounting system in an IT environment:
• A big advantage of financial control over computer-based accounting information systems is that users can efficiently automate financial reporting and modeling support using advanced development.1
• Reporting is an important tool for the entity, through which one can see exactly how information is used in real time in decision-making processes and financial reporting.
• The financial accounting information system represents a centralized database of processes that generate and transform data and information, which can be easily controlled and analyzed by business analysts, managers or other stakeholders.
• These financial control systems must ensure that reports are timely, so that policy makers do not act on old, irrelevant information but rather on quickly and efficiently reported results.
• Consolidation is one of the hallmarks of reporting; one should not forget the enormous number of transactions. At the end of the month, a financial accountant consolidates all paid vouchers by running a financial report. This application system offers a report with the whole amount paid to suppliers for that month. In the case of large corporations, which generate large volumes of transactional data, running these reports based on accounting information systems may take days or even weeks.
Source: processed after http://www.allbusiness.com/accounting/3504565-1.html;

1
http://www.allbusiness.com/accounting/3504565-1.html;


6. POSSIBILITIES FOR ORGANIZING AND IMPLEMENTING FINANCIAL CONTROL IN AN IT ENVIRONMENT

In order to ensure this financial control, we were inspired by the WorldCom and Lehman Brothers1 cases, which helped us to achieve the primary objective of this research. We found that the basis of the financial accounting information system is built on a special register that follows the progress of all economic transactions.
Every economic transaction enters the system registry, from the allocation of funds up to the payment commitments for goods and services. For all transactions there should simultaneously be a special compliance register required by a standard chart of accounts. These records remain as a permanent piece of the history of all financial transactions and as the source from which all the reports and financial statements are derived and controlled.
Based on the WorldCom and Lehman Brothers model, we have established an information system which has access to all information within the entity, thus easing financial control. The model that we want to implement is based on a general register (RG) which manages and performs integrated tracking of the following business processes: customer relationship management, supplier relationship management, complete sales management, and complete supply and stock management; therefore internal or external control will be easier.
Any form of financial control in an entity shall be conducted using internal working
procedures. These procedures are specific only to granting visa for preventive financial
control and financial control management.

Financial Preventive Control

Preventive financial control aims to identify draft operations that do not meet the conditions of legality and which may affect public property. We note that CFP is a preventive control of legality, and to simplify the procedure we would require all recorded documents to be electronic.

1
In 2002, the internal auditors Cynthia Cooper and Eugene Morse of WorldCom used the company's computerized accounting system to discover $4 billion in fraudulent charges and other serious accounting entries. The investigation led to the termination of CFO Scott Sullivan and to new legislation, Section 404 of the Sarbanes-Oxley Act, which regulates companies' internal financial control and introduces procedures. In investigating the causes of Lehman's collapse, a data review of all SIFC systems was carried out, and collecting documents and interviewing witnesses proved to be a key element. The search for the reasons for the company's failure led to an investigation extended to the review of Lehman's operating data across marketing, financial evaluation and other accounting data systems. Lehman's systems provide an example of how not to structure a SIFC. At the time of the bankruptcy, Lehman maintained a patchwork of over 2,600 systems and software applications. Lehman's systems were highly interdependent, but their relations were difficult to decipher and not well documented; it took an extraordinary effort to manage these systems and to obtain the necessary information.


Granting the CFP visa involves the following steps:
• Receiving the documents for which the CFP visa is to be granted. In order to grant the CFP visa with fairness and speed, it is necessary to operate only with electronic documents.
• Verifications are made on the basis of the CFP register, which must contain the following elements: document name, the department that issued it, the content of the document, the department that submitted it for visa, the value of the transactions, the date of return of the document and comments (a data-structure sketch of such a register entry is given after this list).
• All documents submitted for the CFP visa should be electronic, for effectiveness, fairness and clear evidence.
• The handing over of the documents shall be made only to the specific persons entitled by the nature of this operation. Documents submitted for the electronic visa can be emailed, thus shortening the workload and the time used.
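As announced in the list above, the following Python sketch shows one possible data structure for an entry in an electronic CFP register, using the elements enumerated there; the class and field names are our own illustrative choices, not a prescribed format.

# Illustrative sketch of one record in an electronic CFP register.
# The class and field names are our own, not a prescribed format.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CFPRegisterEntry:
    document_name: str
    issuing_department: str
    document_content: str
    submitting_department: str
    transaction_value: float
    return_date: Optional[date] = None
    comments: str = ""
    visa_granted: Optional[bool] = None    # filled in when the CFP visa decision is made

entry = CFPRegisterEntry(
    document_name="payment order 124",
    issuing_department="procurement",
    document_content="payment of IT services contract",
    submitting_department="financial",
    transaction_value=12500.0,
)
entry.visa_granted, entry.return_date = True, date.today()
print(entry)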
Conceptually, the CFP conducted in an IT environment will be as follows:

Figure 3. Financial Preventive Control


Management control

To perform management control in an IT environment we must go through the following steps:
• The opening meeting: on the date set by the control team that will perform the control, a meeting is held in the presence of management to establish the organizational measures necessary to conduct the control.
• Performing the control: meetings, site visits and interviews are carried out in order to collect information and ascertain compliance with the legislation. During the control, the Financial Control Management Notes are revised. Deficiencies are determined by measurements and evidential documents, not on the basis of information coming from outside.
Conducting meetings and approvals requires a large amount of work and time. If the documents are electronic, the verification can be more accurate and faster. If the RG register has been created, the information can be accessed by password by the control bodies, both for internal control and for external control. The RG helps us avoid deficiencies through product codes, which prevent the introduction of erroneous data, thus removing the need for the Observation Note as well as for information received from outside the entity.
• The closing meeting: the inspection team and the controlled structure agree on the shortcomings and draw up the financial control management Protocol.
If there is a general register, it is enough to draw up an electronic Protocol in the closing meeting, which gives the possibility of keeping intermediary electronic copies that will help verify the financial control.
• Tracking the results of financial management control is based on the financial management control Protocol, which allows us to track the records obtained and the deficiencies observed.
If there is a general register, then the Protocol is presented electronically, allowing other control bodies to verify the results obtained and the deficiencies observed.

7. CONCLUSIONS

Financial control is one of the tools that managers use to track progress and evaluate results. From this perspective, controls are an integral part of the economic activity undertaken, and the implementation of financial control is the responsibility of every employee in the institution, in accordance with the specific processes and operations of the business cycle. In this regard, the safeguards require the development of standards, procedures and instructions that control the activities of the entity, establishing clearly the responsibilities and expected performance of the manager and the employees.
The implementation of standards for the financial control information system requires compatibility with the institutional activities, including the organizational structure and the methods and procedures specific to the activity of the institution, in line with the objectives of financial control, the control environment, risk management, surveillance and monitoring, and the objectives specific to strategic management.
Managers are interested in implementing financial controls, which are often preferred for monitoring business performance and evaluating a company's progress against its financial objectives. Once the decisions of strategic management have determined how the company will proceed, financial controls are designed to assess whether the company is pursuing its strategic plans and whether the decisions are valid in relation to the entity's ability to obtain a certain level of performance.

8. BIBLIOGRAPHY

[1] Malinić S., Todorovic M., Implementation of an integrated-accounting-information


system - theoretical and methodological basis and risks. Accounting, Vol. 1-2,
Association of Accountants and Auditors of Serbia, (2011);
[2] Popa Ş., Ionescu C., Audit în medii informatizate, Editura Expert, Bucharest, 2005;
[3] Romney M., Steinbart P. J., Accounting Information System. Eleventh Edition.
Pearson Prentice Hall, (2009);
[4] Vaassen E. H. J., Accounting Information Systems – A Managerial Approach. John
Wiley & Sons Ltd., (2002);
[5] Pierce, B., O’Dea, T. (2003). Management Accounting Information and the Needs
of Managers – Perception of Managers and Accountants Compared. The British
Accounting Review Vol.35, No.3.
[6] http://www.coso.org;
[7] http://www.allbusiness.com/accounting.


FINANCIAL ADVANTAGES OF SOFTWARE PERSONALIZATION

Larisa Gavrila 1*
Sorin Ionescu 2

ABSTRACT

Software personalization is considered a must-have for B2B (business-to-business) clients and suppliers. Clients need personalized applications in order to gain a competitive advantage on the market, but personalization is also needed because of the existing complex IT infrastructure in which the applications must be integrated and implemented. Suppliers, on the other hand, need to offer software personalization solutions in order to keep their market share. In order to determine the advantages of software personalization, a framework is proposed in this article; the article then analyses three software companies listed on the NASDAQ stock exchange. The analysis is focused on identifying the financial advantages a software company gains by being involved in software personalization activities.

KEYWORDS: Software Personalization, Customization, Monetary Value of Customization, Software Customization Importance, Software Personalization Advantages.

1. INTRODUCTION

Personalizing software applications has become a necessity, especially for the business-to-business (B2B) segment. Unlike a domestic user, business clients face an increased complexity of the IT infrastructure in which new applications must be implemented and integrated, so that software customization, in its most simplistic form, can be translated into its adaptation to the existing IT infrastructure.
In order to maintain their market share, providers must align with the new requirements and provide customization services for software applications. As accounting standards do not include specific accounts for software personalization activities, these activities are incorporated in different segments.
The article proposes a framework model describing the relationship between suppliers and
clients and their connections within the software personalization context. The model is
built to determine the advantages of software personalization.
1
* corresponding author, PhD Student of FAIMA, Politehnica University Bucharest, lgavrila@yahoo.com
2
Professor PhD Eng, Politehnica University Bucharest


Moving forward, three companies active in the "Computer software" segment will be analyzed, precisely in order to emphasize the forms under which software customization can be identified. The authors will then analyze the amount of revenue generated by software customization activities, its evolution over time and its share in the total revenue achieved in 2015.

2. THE CLIENT - SUPPLIER RELATIONSHIP WITHIN SOFTWARE PERSONALIZATION SEGMENT

The customer-supplier relationship in the context of software personalization is transposed in Figure 1 - Customer-supplier relationship in the context of software personalization. The figure, represented below, is a proposal for a framework that describes the connections between customers and suppliers.

Figure 1: Customer-Supplier relationship in the context of software personalization

Customer needs are transposed into problems that require one or multiple solutions (4); these solutions can be identified via a provider's capacity (through skills and capabilities). After defining the problem and identifying the solution, we can go further in defining the requirements themselves. These requirements can be transposed into a new product (1), into a co-design initiative (2) or into the adaptation of an existing product or prototype (3).


The nature of the connections is given by the added value resulting from software personalization activities and by the timeframes and costs associated with these types of activities. The success of this cohesion denotes a positive customer experience in relation to the contracted supplier.

3. SOFTWARE CONFIGURATION VERSUS SOFTWARE CUSTOMIZATION

Most of the time no difference is made between configuration and customization, but the impact of choosing software configuration or customization is high; among the impacted areas the following can be mentioned:
- How fast can users work with the new system?
- How much will they depend on IT teams in order to carry out their daily activities?
- How easy will it be for future users to keep up with system upgrades?
A configurable system most often refers to a system that can be easily adapted to a company's operations [1]. Usually one of the expert users, without necessarily having advanced IT knowledge but with preliminary training, can set up such a system. A system configuration can take from several minutes to several hours. Such a system comes with a so-called GUI (graphical user interface), a friendly interface that users can employ for system configuration [1]. Configuration therefore means changing parameters, so that through the GUI, with buttons or drop-down lists, an expert user can make changes at the configuration level without any impact on the source code.
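To make the distinction concrete, the small Python sketch below shows configuration in the sense described above: behaviour changes through a parameter that an expert user could edit through a GUI, while the source code remains untouched; the parameter names are invented for the example.

# Illustrative contrast with customization: behaviour changes through parameters an expert
# user could edit in a GUI, while the source code stays untouched. Parameter names are invented.
configuration = {
    "currency": "EUR",
    "approval_threshold": 5000,     # orders above this value need a manager's approval
    "fiscal_year_start_month": 1,
}

def needs_approval(order_value, config):
    """Pure read of configuration: no code change is required to alter the threshold."""
    return order_value > config["approval_threshold"]

print(needs_approval(7200, configuration))     # True with the default threshold
configuration["approval_threshold"] = 10000    # a 'configuration' change
print(needs_approval(7200, configuration))     # False, without touching the code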
Customizing applications, by contrast, involves using the customer's servers and IT professionals with programming skills [1]. Unlike configuration, where a GUI is enough to complete the configuration activities, personalising the software requires specific tools in the design and development stage and specialized personnel to bring changes to the source code. Also, unlike configuration, where an internal expert user can complete the software configuration activities, in the case of customization the new requirements have to be sent to the IT team, which will analyse them based on their priorities, provide a time estimate and issue a request for financial resources [1].
Some of the characteristics of configurable systems are presented below [1] (Fig. 1):


• Reduced total cost of ownership: the cost of software acquisition includes the purchasing cost and the maintenance cost, which in this case is low because internal users can make the configurations by themselves.
• New functionalities always on hand: new features can be easily incorporated in the application, as the source code has not changed.
• Fast implementation: as programming is not needed, parameter changes can be completed very quickly.
• Reduced IT support: after internal users complete their specialized training for application configuration, they will have enough knowledge to complete the activities and there is little or no need for a specialized IT team.

Figure 1. Characteristics of a configurable system
m

Among the characteristics of customizable systems, the following can be mentioned [1] (Fig. 2):
• It serves more industries: these applications can be used by companies from different industries.
• Covers specific needs: they can cover very specific requirements that can lead to gaining a competitive advantage on the market.
• Customized modules: the changes brought to the application can actually modify it entirely, and in some cases it can look like an entirely new application.

Figure 2. Characteristics of a customizable system

If the configuration of a system is not sufficient, the next step can be customization, as observed in Figure 3 - Configuration and customization [2].


Figure 3. Configuration and customization


Source: Wei Sun; Xin Zhang; Chang Jie Guo; Pei Sun; Hui Su, “Software as a Service:
Configuration and Customization Perspectives”, Congress on Services Part II, 2008.

Moving forward the term software personalization will include both software
configuration and customization.
If the software personalization is required for internal purposes, then the main questions that the company should answer would be:
- In which department will the application be used (finance, procurement, logistics)?
- Are the business processes clear and well defined within the department?
- Are the existing business processes rigid?
- Can the business processes be subject to change?
- Does the company's competitive advantage consist in the process that is about to be incorporated or changed in the application?
If the application is being customized for external customers, the main questions to clarify would be:
- Does the company provide a wide range of packages and services?
- Do the company's clients come from different geographical and cultural environments?
- Are there more than one or two reasons why customers buy the company's applications?
- Are the applications being used for other purposes than the initially designed ones?

4. SOFTWARE PERSONALIZATION FORMS AND SHAPES

The Ultimate Software Group Inc is a cloud solutions provider in the human capital (HR) segment. In its 2015 annual report, The Ultimate Software Group Inc describes its software personalization capabilities as follows [3]:
- Rich and Highly Configurable Functionality. UltiPro has rich functionality built
into the solution and provides extensive capabilities for configurability. As a
result, the customers can avoid extensive customizations and yet are able to
achieve a highly tailored solution to meet their specific business needs. Since
UltiPro's feature-sets are unified, their customers are able to streamline their
management of the total employment cycle and can generate strategic HR and


talent management reports from UltiPro as their primary, central system of record
for their employee data.
- Flexible, Rapid System Setup and Configuration. UltiPro has been designed to
minimize the time and effort required to set up and configure the system to
address individual company needs. Largely because the UltiPro solutions deliver
extensive functionality that can be configured to align with the customers' various
business models with few customizations, the setup of new customers is faster
and simpler than implementations typical of legacy, on-premise software.
- Rich End-User Experience, Ease of Use and Navigation. The products are
designed to be user-friendly and to simplify the complexities of managing
employees and complying with government regulations in the HR, payroll, and
talent management areas.
All software personalization related activities (customization, configuration,
implementation and integration) are mentioned under the umbrella of professional
services:
Ultimate's professional services include system setup and activation (i.e.,
implementation), executive relationship management (“ERM”), and knowledge
management (or training) services; the setup and activation consulting services are
differentiated from those of other vendors by speed, predictability and completeness. [3]
Within the profit and loss account the software personalization costs and revenues are
reflected in the services account.
MicroStrategy Inc is a leader in the enterprise software platforms segment; this company stands out for its capability to produce applications that can be customized and integrated into the existing business intelligence structure:
- Customizable applications integrated into business intelligence systems: the company offers software applications that a client can customize on a large scale; this allows organizations to incorporate their own brand in the mobile and web applications, as well as to integrate these applications into other corporate systems [4].
The fact that this company offers its clients the possibility to incorporate their own brand,
the option of integration with other systems and single sign-on has led to an increase in
deployment options and enables IT groups to implement the programs throughout the
companies in a customized manner while the clients can continue to leverage their
investments in other technologies [4].
When it comes to accounting reporting, the revenues and costs generated by software
personalization activities are included in “other services” account.
Tyler Technologies Inc is a leading provider of solutions and services for information management systems in the public sector. Within its annual report, Tyler Technologies concentrates its software customization activities in the software services category:


- Software services: the company provides a variety of professional IT services to clients who utilize its software products; all of the clients' contracts include installation, training and data conversion services in connection with their purchase of Tyler's software solutions [5]. The complete implementation process for a typical system includes planning, design, data conversion, set-up and testing; at the culmination of the implementation process, an installation team travels to the client's facility to ensure the smooth transfer of data to the new system; installation fees are charged separately to clients on either a fixed-fee or hourly-charge basis, depending on the contract [5].
The company reports the revenues generated by software personalization activities in the category entitled software services.
Going through this analysis of the three companies, we can notice that software personalization covers the types of activities described in Figure 4 - Software personalization activities:

Figure 4. Software personalization activities: configuration, customization, data conversion & migration, implementation, integration


5. SOFTWARE PERSONALIZATION REVENUES' EVOLUTION AND THEIR SHARE WITHIN THE TOTAL REVENUES

To determine the evolution of income from software personalization activities, the revenues declared in the annual reports have been taken into account.
Ultimate Software and MicroStrategy reported their income for the period 2013-2015, while Tyler Technologies reported only the revenues for the years 2014-2015.
MicroStrategy's revenues were adjusted for losses from exchange rate differences as follows: for 2015 the sum of $7,357 was added, for 2014 the sum of $1,078 and for 2013 the amount of $859.
All amounts are expressed in the same currency (US dollars) and in thousands of dollars. All types of income reported as revenue from software customization activities include income from training activities for the use of the customized applications.
The total revenues and the revenues generated by software personalization activities of the three companies fall within the same value range, which allows us to compare the three companies; this value range can be noticed in figure 5 - Graphic representation of revenue segmentation.

Figure 5. Graphic representation of revenue segmentation

From the analysis of the evolution of income in accordance with Figure No. 6-The
evolution of income generated by software personalization activities – it can be noticed
that 2 out of the 3 companies have achieved further growth in this area:
- Ultimate Software Inc. has seen a 15% increase
- MicroStrategy Inc. recorded a decrease of 26%
- Tyler Technologies Inc. recorded an increase of 19%


Figure 6. The evolution of income generated by software personalization activities

We can draw a first conclusion that software personalization constitutes a growing segment. To study the importance of this kind of income, we need to turn our attention towards the shares taken up by these revenues in the total income. A consolidated view is offered by table 1 - The shares of software personalization revenues in total income.
Table 1. The shares of software personalization revenues in total income

Company              2015 (% total income)   2014 (% total income)   2013 (% total income)
Ultimate Software    16.5%                   17.0%                   18.3%
MicroStrategy        19.1%                   23.4%                   24.0%
Tyler Technologies   23.7%                   23.1%                   /
We can see a decline in the share of software personalization revenues over the years,
as follows:
- Ultimate Software Inc. recorded a share of 18.3% in 2013, dropping to 17.0% in 2014
and 16.5% in 2015, although we can notice a 15% increase of the nominal value between
2014 and 2015;
- MicroStrategy Inc. recorded a share of 24.0% in 2013, going down to 23.4% in 2014 and
19.1% in 2015; the massive loss of share recorded in the period 2014-2015 can also be
attributed to the considerable (26%) decrease in the income's nominal value;
- Tyler Technologies recorded a share of 23.1% in 2014, which grew to 23.7% in 2015, in
conjunction with a 19% increase in the nominal value (a sketch of how these indicators
are computed follows below).
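
For transparency, the short Python sketch below shows how the two indicators discussed here, the share of software personalization revenue in total income and the year-over-year growth of its nominal value, can be computed; the revenue figures used in it are hypothetical placeholders rather than the companies' actual reported values.

# Sketch of the two indicators used in this section
# (amounts in thousands of US dollars; the figures below are placeholders).

def share_in_total(personalization_revenue, total_revenue):
    """Share of software personalization revenue in total income, in percent."""
    return personalization_revenue / total_revenue * 100

def yoy_growth(current, previous):
    """Year-over-year growth of the nominal value, in percent."""
    return (current - previous) / previous * 100

# Hypothetical example for a single company:
personalization = {2014: 80_000, 2015: 92_000}
total = {2014: 350_000, 2015: 420_000}

print(f"Share 2014: {share_in_total(personalization[2014], total[2014]):.1f}%")
print(f"Share 2015: {share_in_total(personalization[2015], total[2015]):.1f}%")
print(f"Nominal growth 2014-2015: {yoy_growth(personalization[2015], personalization[2014]):.0f}%")

With these placeholder figures the share declines from 22.9% to 21.9% even though the nominal value grows by 15%, which mirrors the pattern observed above, where a company's share can fall while its personalization revenue still increases.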
In accordance with Figure 7 – Maximum and minimum values of software personalization
revenue shares in total income – we can notice that the share of such income in total
income falls within the range of 16% to 24%, which calls for increased attention to this
area and for maximizing the revenues obtained from it in the future.

Figure 7. Maximum and minimum values of software personalization revenue shares in total
income

6. CONCLUSIONS AND FUTURE DIRECTIONS

The article proposes a model that describes the client-vendor relationship in the
context of software personalization. From the analysis of three IT companies listed on
the NASDAQ stock exchange, a series of ideas about the importance and benefits of
software personalization activities were sketched.
Even though the initial definition of software personalization included only
configuration and customization, after the analysis of the annual reports the definition
can be extended to other activities that involve personalization, such as integration,
implementation, data conversion and migration.
It was also observed that the revenues generated by software personalization activities
registered an increasing trend and that their share in the total revenues varies between
16.5% and 24%, which underlines their significance.
With regard to future directions of study, the analyzed database should be extended in
order to obtain a statistically significant sample and to build and test a range of
hypotheses derived from the ideas expressed above. A parallel analysis focused on costs
is another future direction, as it would show how much it costs to run software
personalization activities and how significant the potential cost savings are.
When it comes to the proposed framework, the connections between suppliers and clients
will need to be analyzed in depth, as they have a major impact on the customer's
experience with a vendor and its products and solutions.


7. REFERENCES

[1] Centric Software, "Configuration vs. Customization: Clarifying the Confusion",
http://www.centricsoftware.com/wpcontent/uploads/2015/03/Centric_Configure_vs_Customize_WP_FINAL.pdf, 2015
[2] Wei Sun; Xin Zhang; Chang Jie Guo; Pei Sun; Hui Su, "Software as a Service:
Configuration and Customization Perspectives", Congress on Services Part II, 2008
[3] Ultimate Software Group Inc., Annual Report 2015,
https://www.sec.gov/Archives/edgar/data/1016125/000101612516000168/ulti-20151231x10k.htm
[4] MicroStrategy Inc., Annual Report 2015,
http://files.shareholder.com/downloads/MSTR/2103749483x0x887383/BB63505E-1B63-486A-AE5D-DBB3B45E1595/MSTR_2015_Annual_Report_-_Final.pdf
[5] Tyler Technologies Inc., Annual Report 2015,
https://www.sec.gov/Archives/edgar/data/860731/000156459016013137/tyl-10k_20151231.htm


JOURNAL
of
Information Systems &
Operations Management

ISSN: 1843-4711
---
Romanian American University
No. 1B, Expozitiei Avenue
Bucharest, Sector 1, ROMANIA
http://JISOM.RAU.RO
office@jisom.rau.ro
