Stakeholders Requirements
Analysis
Deliverable D1.1

Editor

Demetris Trihinas
Reviewers

Manos Papoutsakis (FORTH)
Fenareti Lampathaki (Suite5)
Date

30 June 2017
Classification

Public



UNICORN has received funding from the European Union's Horizon 2020 research and
innovation programme under grant agreement No 731846

Contributing Authors and Version History

1. Demetris Trihinas (UCY): Table of Contents (ToC), document purpose and partner contribution assignment
2. Athanasios Tryfonos (UCY): Background and Terminology section initial content merged, relation to other WPs added
3. Zacharias Georgiou (UCY): Content for methodology followed to derive requirements
4. George Pallis (UCY): Updated methodology and background section, survey first results
5. Marios D. Dikaiakos (UCY): Initial non-functional requirements section, updated methodology with industry findings
6. Spiros Alexakis (CAS): Minor improvements to terminology, refined industry findings in methodology, initial list of system requirements and key findings from interview process
7. Julia Vuong (CAS): Updated user roles, updated functional requirements after merging comments received, updated methodology and background
8. Fenareti Lampathaki (Suite5): Updated non-functional requirements and merged comments referring to survey key findings, merged security content to background
9. Sotiris Koussouris (Suite5): Updated functional requirements, added data privacy protection mention to survey methodology, merged security to background
10. Spiros Koussouris (Suite5): Merged comments on user roles, merged comments on non-functional requirements, conclusion
11. Panagiotis Gouvas (Ubitech): Updated introduction, merged comments on mapping of functional requirements to user roles
12. Giannis Ledakis (Ubitech): Merged comments on market analysis scheme, executive summary and introduction
13. Manos Papoutsakis (FORTH): Merged comments on stakeholders analysis, functional requirements and figure numbering
14. Bernhard Koelmel (Steinbeis): Final version







Table of Contents
1 EXECUTIVE SUMMARY 7
2 INTRODUCTION 8
2.1 Document Purpose and Scope 10
2.2 Document Relationship with other Project Work Packages 10
2.3 Document Structure 11
3 BACKGROUND AND TERMINOLOGY 12
3.1 Programmable Infrastructure 12
3.2 Multi-Cloud Offerings 13
3.3 Micro-services 14
3.4 Containerization 15
3.5 DevOps Continuous Integration and Delivery 18
3.6 Annotation-Based Programming 20
3.7 Security Enforcement and Data Privacy Preserving 21
4 METHODOLOGY FOLLOWED TO DERIVE UNICORN SYSTEM REQUIREMENTS 24
4.1 Key Findings from industry studies 27
5 UNICORN STAKEHOLDER IDENTIFICATION 30
5.1 Stakeholders and Target Audience 30
5.2 User Roles 31
5.3 Market positioning 33
6 REQUIREMENT ANALYSIS SCHEME 47
6.1 Interviewee Profile 47
6.2 Unicorn Survey and Interview Study Key Findings 48
7 UNICORN SYSTEM REQUIREMENTS 64
7.1 Functional Requirements 64
7.2 Non-Functional Requirements 76
8 CONCLUSIONS 87
9 REFERENCES 89
10 ANNEX 95
10.1 Identified Unicorn Functional Requirements 95



10.2 Disseminated Questionnaire 95



List of Figures
Figure 1: Unicorn Vision 9
Figure 2: Deliverable Relationship with other Tasks and Work Packages 11
Figure 3: Monolithic Legacy Enterprise Architecture vs Micro-service Architecture Approach 14
Figure 4: Hypervisor vs Container-based Virtualization 16
Figure 5: Docker Relation to Linux Container Notion 16
Figure 6: CoreOS Host and Relation to Docker Containers 17
Figure 7: Unikernel Relation to VMs and Containers 18
Figure 8: Continuous Integrations, Continuous Delivery and Continuous Deployment Steps 19
Figure 9: Indicative Example of Annotation Declaration in Java 21
Figure 10: High-Level Abstract Methodology to Derive Unicorn System Requirements and Relevant Key
Technologies 24
Figure 11: Unicorn Market Positioning 34
Figure 12: Organisation Operating Business Domains as Identified by Interviewees 48
Figure 13: Number of Employees in IT department 48
Figure 14: Interviewee Role in Organisation 49
Figure 15: Usage of Annotation-based Programming Paradigm by Interviewees 49
Figure 16: Popular Programming Frameworks Used by Interviewees 50
Figure 17: Usage of Collaboration Tools Among Employees of Organisation 50
Figure 18: Popularity of CI/CD Frameworks Embraced by Surveyed Organisations 51
Figure 19: Challenges Preventing Full Adoption of CI/CD Pipeline 51
Figure 20: Cloud IDE Embracement by Interviewed Organisations 52
Figure 21: Popular reasons preventing Cloud IDE adoption from responders not using Cloud IDEs 52
Figure 22: Micro-service Architecture Adoption by Interviewed Organisations 53
Figure 23: Containerized Solution Adoption by Interviewed Organisations 54
Figure 24: Containerized Solution Adoption Challenges as Identified by Interviewed Organisations 54
Figure 25: Containerized Solutions that have been adopted by those using or considering containerization 55
Figure 26: Multi-Cloud Deployment Model Adoption by Interviewee Organisations 55
Figure 27: Popular Cloud Providers 56
Figure 28: Multi-Cloud Adoption Challenges 57
Figure 29: Monitoring Level Targets as Responded by Interviewed Organisations 57
Figure 30: Monitoring Tool Type Adoption by Interviewed Organisations 58
Figure 31: Monitoring Challenges Faced by the Interviewed Organisations 58
Figure 32: Elastic Scaling Adoption 59
Figure 33: Elastic Scaling Type 59
Figure 34: Elasticity Tools Used by Organisations that Have Adopted Elastic Scaling as Part of their ALM 60
Figure 35: Elastic Scaling Adoption Challenges 60
Figure 36: Stage of Application Lifecycle at which Security is Considered by Interviewed Organisations 61
Figure 37: Security Mechanisms Adopted by Interviewed Organisations (#1) 62
Figure 38: Security Mechanisms Adopted by Interviewed Organisations (#2) 62
Figure 39: Security Mechanisms Adopted by Interviewed Organisations (#3) 63
Figure 40: Non-Technical Quality Aspects as Organised by ISO/IEC 25010:2011 77



List of Tables
Table 1: Industry Studies and Points of Interest Relevant to Unicorn 27
Table 2: Unicorn Actors 31
Table 3: Market Players Analysis Brief Overview 36
Table 4: Market Players Analysis DevOps Support and Highlight Features 38
Table 5: Market Players Analysis Perspectives 43
Table 6: Organisations that Participated in the Interview Process 47
Table 7: Functional Requirements Relation to User Role 74



1 Executive Summary
The main objective of the Unicorn project is to deliver a unified platform that will enable SMEs and Startups
to develop, deploy and manage secure-by-design and elastic-by-design cloud applications and services, that
follow the micro-service architectural paradigm, on multi-cloud programmable execution environments. The
platform will allow software developers to tackle data privacy constraints and restrictions through the
application of various privacy policies and will ease the resource monitoring process. In this respect, Deliverable
D1.1 - Stakeholders Requirements Analysis, hereafter simply referred to as D1.1, provides a clear set of
guidelines for the technical activities of the Unicorn project. These guidelines are expressed in the form of
functional and non-functional requirements that will assist in shaping the final framework fulfilling the vision
and objectives of the project.

The work in this deliverable begins by presenting an agreed background and terminology for innovative
technological concepts such as programmable infrastructure, multi-cloud offerings, micro-services,
containerization, DevOps, annotation-based programming and various security enforcement mechanisms. This
terminology will be used consistently throughout all future technical deliverables, as these concepts form the
basic technological pillars on which the implementation of the Unicorn project will be based.

Furthermore, the methodology that was used to derive the functional and non-functional requirements is
presented. At the beginning of this agile methodology, the partners analysed industry reports, surveys and
practices in order to identify the Unicorn stakeholders and the potential user roles to which the functional system
requirements will be mapped. Based on this analysis of the industry, an interview questionnaire was designed
to identify the key technologies taken up by the SME and Startup eco-system in Europe, as well as the emerging
technologies that are within their interests but cannot yet be successfully integrated into their software stacks
due to the different challenges they are facing.

Lastly, the analysis of the interview responses has contributed to defining and clarifying a set of functional and
non-functional system requirements that can be assigned to the identified user roles involved in the different
stages of the application lifecycle.



2 Introduction
Cloud computing shifts IT spending to a pay-as-you-go model where, similar to utility billing, you only pay for
what you use and only when you use it [1]. Cloud computing has revolutionized the IT industry to the point where
any person, with even basic technical skills, can access and obtain, via the internet, on-demand, vast and scalable
computing resources at low cost [2]. For Small and Medium Enterprises (SMEs) and today's Startups, this well-
established argument is sound. Cloud computing eliminates the capital expense of buying hardware and
diminishes the costs of configuring, running and maintaining on-site computing infrastructures of any size. Thus, it
is now cheaper and easier to innovate, enabling businesses to dramatically lower their cost of operations and,
by extension, the cost of starting a business: independent businesses share their collective infrastructure
costs via the cloud, thus spurring entrepreneurship [3]. Therefore, it is no wonder that SMEs and Startups
are migrating core services and products of their business to the cloud. A recent study shows that, in this digital
economy, more than 37% of SMEs have embraced the cloud to run parts of their business, while projections
show that by 2020 this number will grow to reach 80% [4].

While opportunities for innovation are riper than ever, SMEs and Startups with a limited number of developers,
who should ideally be focused on core product development, find themselves constantly in need of tackling security,
compliance and code vulnerabilities by designing software security mechanisms to prevent data breaches and
ensure customer privacy. A recent study found that 62% of data breaches impacting SMEs accounted for a loss
of more than 50% of their customer base [4]. Hence, as data continues to migrate to the cloud, the cost of bad
security will only continue to rise. The other inhibitor that remains a consistent barrier to cloud adoption is
vendor lock-in, where an organization fears becoming beholden to an individual cloud vendor [5].
However, while vendor lock-in remains the second inhibitor preventing cloud adoption, concerns have been
dropping recently due to interoperability initiatives establishing open APIs and libraries for cloud access and
deployment [6], [7], along with topology specifications and standards [8], [9]. A recent study by RightScale (2017)
[10] reveals that SMEs use, on average, up to 6 different clouds (including private clouds) to achieve their
business objectives, with the hybrid cloud establishing itself as the most popular deployment model for SMEs.
Nonetheless, while the cloud promises to automate application and infrastructure management, multi-cloud
deployments raise the complexity of monitoring, managing and effectively projecting cost budgets for services
and core products distributed across multiple clouds, requiring considerable engineering effort to overcome
these challenges.

Furthermore, resource scaling (dubbed elasticity) introduces another challenge that must be tackled as well.
Elasticity is one of the most-hyped features of cloud computing and has, since 2014, been driving cloud adoption
[11]. However, the reality does not necessarily measure up to cloud providers' promises [12]. Website traffic from
sudden user demand can explode rapidly, and the need for immediate scalability to address demand comes with
many obstacles. Cloud providers offering auto-scaling (e.g., AWS) automatically provision virtual instances when
user-defined high/low thresholds are violated [13]. However, auto-scaling is challenging, especially when
determining whether an alert is issued due to a spike in demand for an application or due to a malfunction of
the system [14]. A distributed denial of service (DDoS) attack or similar issue could initially appear to be an
increase in demand, and a mechanism that automatically scales in response may not be a good thing. Fast
scaling could, in fact, end up being detrimental, resulting in unwanted charges [15].
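To make the threshold-based mechanism and its spike problem concrete, the following is a minimal sketch of such an auto-scaler: it acts only after several consecutive threshold violations, so a momentary spike (or a surge that is really an attack) does not immediately trigger scaling. The class name, thresholds and window size are illustrative and not taken from any real provider's API.

```python
from collections import deque

class ThresholdAutoScaler:
    """Toy auto-scaler using user-defined high/low utilisation thresholds."""

    def __init__(self, high=0.80, low=0.20, window=3):
        self.high, self.low = high, low
        self.window = window
        self.history = deque(maxlen=window)  # recent utilisation samples

    def observe(self, utilisation):
        """Record a monitoring sample and return the scaling action."""
        self.history.append(utilisation)
        if len(self.history) < self.window:
            return "no-op"                        # not enough evidence yet
        if all(u > self.high for u in self.history):
            return "scale-out"                    # sustained overload
        if all(u < self.low for u in self.history):
            return "scale-in"                     # sustained idleness
        return "no-op"
```

With this sketch, a single 95% utilisation sample yields "no-op", while three such samples in a row yield "scale-out"; a production auto-scaler would additionally need cooldown periods and anomaly detection to separate genuine demand from attacks.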




Figure 1: Unicorn Vision

Nowadays, a number of cloud application management frameworks claim to address the above challenges by
facilitating the design and deployment of cloud applications and services. Some of these frameworks are
proprietary [16] [17], locking their users to specific providers, while others are generic [18] [19] [20], allowing
the management of applications on different infrastructures through adapters for popular cloud providers. A
common denominator in all the aforementioned frameworks is that none provides the ability to manage the
lifecycle of a cloud service distributed across multiple availability zones and/or cloud sites. In turn, no framework
currently tackles data protection privacy constraints and restrictions arising from national and EU directives for data
movement across application tiers, availability regions or multiple cloud sites. Also, elastic techniques are not
well supported to deal with multi-dimensional elastic properties covering resources, costs and quality [21]. Most
importantly, these tools tackle the challenges of managing cloud applications after application development.
This often results in more iterations in the application development cycle if policy definition for elasticity, security
and privacy deployment constraints for different cloud providers is not foreseen at the development phase,
delaying time-to-market and negatively impacting SMEs and Startups comprised of small development teams.

As a result, new categories of tools and solutions are needed to address the challenges holding back SME growth.
Therefore, the concept of the Unicorn project is to deliver a platform that facilitates the deployment of
trustworthy applications and services, creating a more entrepreneurial ICT ecosystem. Specifically, the Unicorn
platform targets, but is not limited to, SME and Startup development teams that follow agile and continuous
software delivery principles to improve software design on a continuous basis and, thus, increase productivity.

Hence, Unicorn will simplify the design, deployment and management of secure- and elastic-by-design multi-
cloud services by providing software development teams with a cloud IDE plug-in and software design libraries
to reduce the development time of cloud applications. This will enable software developers to design and develop
secure and reactive applications through their IDE, right where they write their code, by incorporating a
set of software code annotations, validation and packaging tools for security, privacy protection, monitoring and
elasticity policy definition at the platform, application, component and even code segment level, without having
to manually perform resource mappings and bindings. To circumvent the burdensome installation and
integration process, the Unicorn platform will enable continuous orchestration and automatic optimization of
portable and dynamic cloud services running on virtual instances or micro-execution containers for increased
security, data protection privacy, and vast resource (de-)allocation. Once the software team has finished
development and is ready to deploy the application, the deployment tool of the cloud IDE plug-in will bundle
application code, third-party libraries and Unicorn annotated policies, and will even allow users to search for required
OS libraries and runtime software stacks, as the Unicorn development paradigm supports the notion of micro-
execution container environments. Specifically, containerized environments are particularly relevant to micro-
services and the developing concept of immutable infrastructure, where cloud offerings served from virtual
instances are treated as disposable artefacts and can be regularly re-provisioned solely from version-controlled
code. What is more, the support the Unicorn platform gives software development teams does not stop at
application deployment. To eliminate security threats, the Unicorn platform will provide continuous risk, cost
and vulnerability assessment. In other words, by using Unicorn, software teams focus on core application feature


development logic, not the scaling, monitoring and security issues, which are handled in the background by the
Unicorn platform, ensuring interoperability across multiple and different clouds. This reduces software release
time and provides a powerful tool for SMEs that follow agile and continuous software delivery principles to
continuously improve software design and productivity.
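As a rough illustration of the annotation idea described above, the sketch below uses a Python decorator (the closest analogue to a source-code annotation) to attach an elasticity policy to a code segment. The decorator name, its parameters and the metadata layout are purely hypothetical and do not reflect the actual Unicorn annotation library.

```python
def elasticity_policy(metric, scale_out_above, max_instances):
    """Attach a hypothetical elasticity policy to a function.

    A platform could later discover the `_policy` metadata on deployed
    code and derive monitoring and scaling rules from it, without the
    developer performing manual resource mappings and bindings.
    """
    def decorate(func):
        func._policy = {
            "metric": metric,
            "scale_out_above": scale_out_above,
            "max_instances": max_instances,
        }
        return func
    return decorate

@elasticity_policy(metric="cpu", scale_out_above=0.8, max_instances=10)
def render_catalogue(page):
    # Illustrative request handler; the policy applies to this code segment.
    return f"catalogue page {page}"
```

The decorated function behaves exactly as before, while the attached metadata travels with the code, which is the essence of annotation-based policy definition at the code-segment level.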

2.1 Document Purpose and Scope


The purpose of this document is to provide a comprehensive foundation describing the basic set of design and
implementation guidelines that will initiate and guide the development of the IT components comprising the
Unicorn platform. In this respect, Deliverable D1.1 aims to identify the stakeholders of the Unicorn ecosystem
and derive clear and basic descriptions of the system requirements after analysing and prioritizing the needs of
the industry and the Unicorn Project's Stakeholders. This is achieved by designing an online survey and
performing personal interviews with carefully selected project Stakeholders within and beyond the consortium
in order to probe the ICT needs of the EU SME and Startup eco-system. Thus, the requirements are meant to drive
the design and development process, as they comprise the constraints that help the Unicorn ecosystem
and platform best match the project vision and satisfy the identified technological challenges and market
gaps. Requirements capture the functional and non-functional aspects of the Unicorn project and are an
important input to the verification and validation process, since tests and evaluation KPIs should trace back to
specific requirements. To this end, functional requirements represent the list of functional properties that need
to be implemented and ultimately supported within the context of the Unicorn ecosystem and platform. This
includes all behavioural aspects of the system components, as well as the tools and applications. On the other
hand, non-functional requirements concern performance, scalability, security and privacy aspects.

2.2 Document Relationship with other Project Work Packages


With the identification of the targeted stakeholders and the documentation of the basic functional and non-
functional technical requirements, this deliverable (D1.1) will be used as an agreed-upon instruction set guiding
the development of the IT components that must be delivered by the Unicorn Project. Hence, D1.1 (Stakeholders
Requirements Analysis) marks the completion of Task 1.1 Requirements Analysis and Stakeholders
Identification. Figure 2 depicts the direct and indirect relationships of the deliverable to the other Tasks and
Work Packages (WPs). The definition of system-wide requirements and the key technology findings identified
by following the roadmap (described in Chapter 4) for probing the EU SME and Startup eco-system will drive
the documentation of the Unicorn reference architecture (D1.2). In particular, the Unicorn reference
architecture is a cornerstone of the project, as functional and non-functional requirements are directly mapped
to well-defined system entities, thus guiding the technical work of WP2-WP5. On the other hand, with the clear
definition of the project and the prioritization of requirements to match the needs of the use-cases (D1.2), the
work in WP6 Demonstration can begin as planned.




Figure 2: Deliverable Relationship with other Tasks and Work Packages

2.3 Document Structure


The remainder of this deliverable is structured as follows: Chapter 3 introduces a descriptive Background and
Terminology synopsis covering the key concepts related to the notion of Programmable Infrastructure. This
synopsis will be used as a reference glossary throughout the Unicorn project deliverables and interactions with
project Stakeholders. Chapter 4 presents a comprehensive description of the methodology followed to derive the
System Requirements for the Unicorn project by designing an online survey and performing personal interviews
with carefully selected project Stakeholders in order to probe the ICT needs of the EU SME and Startup eco-
system. In relation to this, Chapter 5 documents the identified project Stakeholders and target audience, while
it also goes one step further by describing the list of platform User Roles. Chapter 6 introduces the
Requirements Analysis Scheme, which documents the key findings derived from the disseminated online survey
and the conducted personal interviews that helped the consortium compile the list of system requirements
introduced in Chapter 7. The list of functional and non-functional requirements, along with the Unicorn eco-system
user roles, will be adhered to throughout future project deliverables and will serve as guidelines for the technical
work to be performed to deliver the Unicorn platform. Finally, Chapter 8 concludes this deliverable.



3 Background and Terminology
Before proceeding with the stakeholder identification and the requirement collection and analysis process, it is
important to identify and elaborate on the key concepts driving the innovative technological axes of the Unicorn
project. The terminology determined in this section will work as a reference guide across all future Unicorn
technical deliverables.

3.1 Programmable Infrastructure


Programmable infrastructure is the IT concept of applying methods and tooling established in software
development to the management of IT infrastructure. This includes, but is not limited to, automation, on-demand
resource (de-)provisioning, service integration and delivery, API versioning, data access, immutability
and agile development [22].

What is more, the notion of programmability can be viewed and examined from two different perspectives
[23]. In particular, from a developer perspective, programmability is the means to create the proper execution
environment independently of the underlying physical resources. Thus, there is a need for both overarching
resource abstractions at the design/development stage and convenient APIs at run-time, in order to implement
an application in an environment-agnostic way and to dynamically tailor it to the actual (and usually changing)
context. In this direction, the Programmable Infrastructure provides developers with a common and single point
of access to all resources, hiding physical issues like resource nature, faults, maintenance operations, and so on.
On the other hand, from an infrastructure provider perspective, programmability mostly refers to the
provider's concerns with the operation and maintenance of (usually) large pools of resources. In particular,
infrastructure providers are in need of handy tools to deal with typical management tasks like insertion,
replacement, removal, upgrade, restoration and configuration with minimal service disruption and downtime.
In this direction, a high degree of automation is desirable, through programmatic recourse to self-* capabilities
(self-tuning, self-configuration, self-diagnosis, self-healing).
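The automation described above can be illustrated with a minimal sketch: the desired resource pool is declared as data and a reconcile step programmatically drives the running infrastructure towards it. The provisioning callbacks and role names here are hypothetical stand-ins for a real IaaS API, not part of any specific offering.

```python
def reconcile(desired, running, provision, deprovision):
    """Drive the running resource pool towards the declared desired state.

    `desired` and `running` map a resource role to an instance count;
    `provision`/`deprovision` stand in for calls to a real IaaS API.
    """
    for role, want in desired.items():
        have = running.get(role, 0)
        for _ in range(want - have):    # too few instances: add some
            provision(role)
        for _ in range(have - want):    # too many instances: remove some
            deprovision(role)
```

For example, with a desired state of three web instances and one database instance, and a running pool of one web and two database instances, the reconcile step would provision two web instances and deprovision one database instance, with no manual intervention.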

Cloud computing adheres to the notion of Programmable Infrastructure by providing users with (virtual)
resources on demand, according to their needs, and by metaphorically blurring the real physical infrastructure
(bare metal) inside an opaque cloud [24]. The kind of resources exposed by clouds depends upon the specific
service model: they are infrastructural elements like (virtual) hosts, storage space and network devices
(Infrastructure-as-a-Service model, IaaS), computing platforms including the Operating System and a running
environment (Platform-as-a-Service model, PaaS), or application software like databases, web servers and mail
servers (Software-as-a-Service model, SaaS). In Unicorn, we mainly target the IaaS model since, orchestration-wise,
it gives developers the broadest control over the cloud execution environment for their applications. However,
the Unicorn project also targets providing the appropriate tooling sets to developer teams to ease cloud
application development, security enforcement and lifecycle management; therefore, while not targeting
PaaS offerings per se, it resembles a PaaS service or, better, DevOps-as-a-Service.

In the following, we present an overview of the key concepts related both to the Unicorn project and to the notion
of Programmable Infrastructure. Although the following approaches may adhere to different architectures,
frameworks and implementations (the State-of-the-Art will be thoroughly documented in D1.2), they are
interrelated and their synergy towards a fully programmable infrastructure is more and more evident in today's
platforms.



3.2 Multi-Cloud Offerings
To achieve their cloud goals, business leaders are increasingly choosing to work with multiple cloud offerings
and/or cloud providers [25]. A dominant factor is that leading cloud providers are constantly innovating and
introducing new technologies to better their services, so an enterprise with a multi-cloud solution can be
proactive in the market, electing to consistently employ the best services and value, from any given service
provider, under any given circumstances. A recent study by IDC [26] predicts that 86% of enterprises will require a
multi-cloud strategy to support their business goals within the next two years, while other studies (e.g.,
RightScale's State of the Cloud yearly trends [10], [27]) reveal that the hybrid cloud is dominating the interests
of more than 70% of IT-related organisations [28]. However, while the terms hybrid cloud, multi-cloud and even
federated cloud are used interchangeably in studies across the industry, only when interviewees are specifically
questioned (a task performed by Unicorn, as documented in Chapters 4 and 6) does it become apparent that
organisations often refer to different cloud deployment models when using these terms.

Therefore, in what follows we clarify the different (multi-) cloud deployment models revolving around the notion of
using more than one cloud offering and/or cloud service provider.

MC1 Cloud Bursting: This model allows workloads to move between private and public cloud
offerings as computing needs dynamically change [29]. Specifically, organisations benefit from the
scalability of public clouds for demanding compute operations, otherwise limited by the infrastructural
resources of the organisation, while also leveraging the security provided by their private cloud
infrastructure by not exposing protected and sensitive data at all times. Furthermore, organisations
can benefit from the reduced access time and latency of data exchange inside a private cloud.
MC2 One Cloud Provider, Multiple Availability Zones: This model supports the use of only one cloud
provider or cloud offering type, albeit with multiple availability zones, regions and/or cloud sites used
to deploy organisation services [30]. For instance, an organisation may choose to offer
its services closer to consumers by selecting appropriate availability zones (e.g., AWS offers EU offerings
via its Ireland and Frankfurt zones), or it may deploy loosely-coupled services across multiple cloud sites that
all use the same cloud offering type (e.g., OpenStack, VMware). The latter case is highly relevant to
the health sector, where health institution data (e.g., clinic patient health records) are, for security and
privacy reasons, protected and used behind private cloud deployments but can still be accessed
after obtaining authorization from other inter-connected health institutions.
MC3 Multiple Cloud Providers, Heterogeneous Offerings: This model supports the ability of
organisations to route their workloads to the respective providers that best suit particular tasks of a
service's operations (e.g., data storage, processing) [25]. For instance, an organization may conclude
that, to achieve certain cost reductions in its cloud computing bill, its cloud storage needs
would be best shifted to Amazon Web Services (AWS), while its data processing needs for particular
(offline) tasks (e.g., image processing) might be better serviced by utilizing Microsoft's Azure machine
learning data pipeline.
MC4 Multiple Cloud Providers, Homogeneous Offerings: This model allows the use of homogeneous
offerings (e.g., the same or similar VM types for a deployed service) from multiple cloud providers (e.g.,
AWS, Google Compute Engine) to support the continuous availability of an organization's services [31]. With
this model, organisations benefit by allowing operations to carry on even in the event of cloud provider
downtime, as cloud resource acquisition is distributed among the selected cloud service providers. In
particular, this model also allows load to be balanced across providers, while reduced access time



and latency for intra-data exchange are achieved for the offerings inside the boundaries of each cloud
provider.
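A deployment model such as MC3 can be summarised in code as a simple routing table that maps each workload type to the provider judged best suited for it, falling back to a private cloud otherwise. The provider names and task types below are examples only, not recommendations.

```python
# Hypothetical per-task provider routing for an MC3-style deployment.
WORKLOAD_ROUTING = {
    "object-storage": "aws",           # cost-effective bulk storage
    "image-processing": "azure",       # offline ML data pipeline
    "web-frontend": "google-compute",  # latency-sensitive serving
}

def route(task_type, default="private-cloud"):
    """Return the target cloud for a task, defaulting to the private cloud."""
    return WORKLOAD_ROUTING.get(task_type, default)
```

In practice such a table would be driven by cost, compliance and performance policies rather than hard-coded, but the principle, routing each task to its best-suited provider, is the same.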

3.3 Micro-services
The evolution of new software development paradigms follows the need to develop applications
that adhere to the notions of modularity, distribution, scalability, elasticity and fault-tolerance [32]. A micro-
service architecture is the set of services that results from decomposing a single
application into smaller pieces (services) that tend to run as independent processes and inter-
communicate, usually via lightweight and stateless communication mechanisms (e.g., RESTful APIs over HTTP)
[33]. These (micro-) services are built around business capabilities and are independently deployable by fully
automated deployment machinery. For (micro-) services there is a bare minimum of centralized management,
and such services may be written in different programming languages and may even use different data storage
technologies [34].


Figure 3: Monolithic Legacy Enterprise Architecture vs Micro-service Architecture Approach

To understand the logic behind a micro-service architectural approach, it is useful to compare it to a monolithic
approach (Figure 3), where a single executable hosts the entire functional logic of an application, as in the
case of a web service handling HTTP requests while also being responsible for executing domain logic, database access
and HTML view population. Hence, all logic for handling web requests runs within a single process. However,
this approach features a number of disadvantages, often referred to as monolith inhibitors [35]. In particular,
feature roll-outs and software code changes are always tied together: even a single change made to a small
code segment of the application requires the entire monolith to be rebuilt and re-deployed. Over time, and as
the software stack expands, it becomes evident that a good modular structure is hard to keep, making it difficult
to confine software code changes that ought to affect only one module to that module. Most importantly,
resource capacity provisioning for the software stack requires scaling the entire application rather than only the
specific services in real need of additional resources.

In contrast to monoliths, micro-services are decomposed into services organised around discrete business
capabilities. The boundaries between these units are usually comprised of functional APIs that expose the core
capabilities of each service. Large systems are then composed of many (micro-) services, whereby
communication between micro-services is a central ingredient. For instance, such is the case of amazon.com1,


1
https://www.amazon.com/



where the different aspects of the e-commerce platform (recommendations, shopping cart, invoicing and inventory management) are split into discrete, scalable and independent (micro-) services [36]. Instead of all being part of one enormous monolith, each business capability is a self-contained service with a well-defined interface. The advantage of this is that separate teams are each responsible for different aspects of the platform, allowing each team and its software core to develop, test, handle failures and scale independently. In turn, continuous delivery becomes possible, as small units are easier to deploy and their entire lifecycle easier to manage.
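As a minimal illustration of such a self-contained service with a well-defined interface, the sketch below uses the JDK's built-in HttpServer to expose a hypothetical shopping-cart endpoint; the CartService name, route and JSON payload are illustrative, not part of any real platform:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CartService {
    /** Builds a tiny "shopping cart" service exposing a single REST endpoint. */
    public static HttpServer create(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/cart/total", exchange -> {
            // Peers see only this JSON contract, never the service's internals.
            byte[] body = "{\"total\": 42.0}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = create(0); // port 0 picks a free ephemeral port
        server.start();
        System.out.println("cart service listening on " + server.getAddress());
        server.stop(0); // stop immediately; a real service would keep running
    }
}
```

A peer service would interact with this one only through `GET /cart/total`, never through shared data structures or database schemata.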

Finally, decentralized data management is highly evident where each service dealing with a specific function of
the business process may manage its own database, either different instances of the same database technology
or entirely different database systems, so as to optimize data storage, processing and acquisition to the
heterogeneous needs and scale of each business function. As stated by A. Cockcroft, who oversaw Netflix's transition from a monolithic DVD-rental company to a micro-service architecture comprised of many small teams working together to stream content to millions of users, a micro-service with a correctly bounded context is self-contained for the purposes of software development [37]. Therefore, one can understand and update a micro-service's code without knowing anything about the internals of its peers, because the micro-service and its peers interact strictly through APIs; there is no need for sharing or exposing (with security threats lurking) data structures, database schemata, or other internal representations of objects. Thus, the commonly understood contract between micro-services is that their APIs are stable and forward compatible.

3.4 Containerization
Resource virtualization, in general, consists of an intermediate software level on top of physical resources (bare
metal) and the operating system, providing abstractions for multiple virtual resources (e.g., compute, memory,
storage, etc.), often bundled together and denoted as virtual machines (VMs) or virtual instances. VMs can also
be seen as isolated execution contexts [38]. In particular, VMs require full guest operating systems in addition
to binaries and various libraries that are necessary for the applications to run, which translates into large isolated
files that store their entire file-system on the host machine [39], [40]. Each VM runs on top of a hypervisor, a specialised piece of software on the host operating system that is responsible for the operation of the VM and the management of the resources it needs from the host machine. Today, hypervisor-based virtualization is the most popular method of resource virtualization, with XEN [41], VMware [42] and KVM [43] as its main representatives. Although security concerns have been addressed through isolation, security limitations still exist, mainly due to numerous vulnerabilities masked in the dependencies of deployed applications on third-party binaries and libraries [44].

On the other hand, containerization is a virtualization method for deploying and running distributed applications without the need to launch entire VMs. In particular, containerization (Figure 4) allows virtual instances to share a single host operating system and relevant binaries, dependencies and/or (virtual) drivers, in a secure but also portable and interoperable way [45]. Application containers hold components such as files, environment variables, and libraries required to run the desired software. Because containers do not carry the overhead of the entire guest operating system that VMs require, they are smaller, easier to migrate, faster to boot, and need less memory; as a result, many more containers than VMs can run on the same infrastructure [46]. In turn, application development with containers is a natural fit for a micro-service approach: under this model, complex applications are split into discrete, modular units where, e.g., a database backend runs in one container while the front-end runs in a separate one. Hence, containers reduce the complexity of managing and updating the application because
a problem or change related to one part of the application does not require an overhaul of the application as a
whole [47].


Figure 4: Hypervisor vs Container-based Virtualization

Since containers share the operating system kernel, the isolation provided is weaker than in hypervisor-based virtualization; nevertheless, from the user's perspective, each container appears to execute a single stand-alone OS. Isolation in container-based virtualization is achieved through kernel namespaces and Control Groups (cgroups) [48] [49]. Namespaces are a feature of the Linux kernel that allows different processes to have different views of the system, while cgroups, another feature of the Linux kernel, manage and limit resource access for groups of processes through limit enforcement. To run a containerized image, specialized software must be present on top of the operating system: the Container Engine, which utilizes the Linux container (LXC) mechanisms described above [50]. The most popular Container Engine is Docker, which builds on LXC techniques [51].


Figure 5: Docker Relation to Linux Container Notion

Docker is the leading container platform with the ability to package and run containerized applications. It
provides a complete toolset to manage the lifecycle of containers, from development phase to deployment.
Docker streamlines the development lifecycle by allowing developers to work in standardized environments
using local containers and allows for highly portable workloads. It is written in Go and takes advantage of several
features of the Linux kernel to deliver its functionality such as namespaces and cgroups. However, as Docker's
technology is based on LXC, containers do not run an independent version of the OS kernel. Instead, all
containers on a given host run under the same kernel, with only application resources isolated per container.
This allows for a certain degree of isolation (though not as strong as a full VM's) with a lower resource overhead, but leaves an attack surface through vulnerabilities exposed in the central OS daemon managing co-located containers [52]. To improve isolation by providing secure containerization while still adhering to Linux kernel principles, CoreOS was designed to alleviate many of the flaws inherent in Docker's container model [53]. In particular, CoreOS (Figure 6) features a read-only Linux rootfs with only /etc being writable. In turn, as containers are isolated, even co-located ones, communication between them is handled over the IP network, while network configurations are exchanged over etcd.


Figure 6: CoreOS Host and Relation to Docker Containers

For the deployment and orchestration of containers, frameworks such as Docker Swarm [54], Google's Kubernetes [55] and Fleet [56] instantiate and coordinate the interactions between containers across a cluster.
Therefore, container orchestration tools can be broadly defined as providing an enterprise-level framework for
integrating and managing containers at scale. Such tools aim to simplify container management and provide a
framework not only for defining initial container deployment but also for managing multiple containers as one
entity, for purposes of availability, scaling, and networking, while the underlying CoreOS provides strong
isolation to the above Docker execution environment. Hence the container solution stack presents itself as ideal
for micro-service architectures [32], as micro-services are indeed built in this manner: a number of thin
containers, each with a minimal set of processes, interact over well-defined (software) network interfaces. Thus, for micro-services, a separate container is prepared for each of the components comprising the cloud application, which makes it straightforward to deploy a distributed, multi-component system able to scale its different components both horizontally and vertically.

In turn, unikernels are specialized virtual machine images compiled from the modular stack of application code,
system libraries and configuration which adhere to both the principles of containerized execution environments
and programmable infrastructure [57]. Specifically, unikernels are specialized single-purpose images
disentangling applications from the underlying operating system as OS functionality is decomposed into modular
and pluggable libraries (similar to CoreOS). Developers select, from a modular stack, the minimal set of
libraries (e.g., network, block devices), which correspond to the OS constructs required for their application to
run. These libraries are then compiled with the application's code to build sealed, fixed-purpose containerized environments which run directly on the hypervisor without an intervening OS, as depicted in Figure 7. Therefore, along with the benefits of containerization, which include (i) short boot times (in the few-second range) [58], (ii) small image sizes (a few MBs) [59] [60] and (iii) strong security [61], unikernels exhibit strong isolation guarantees due to hypervisor-based execution, live migration and robust SLAs [62]. These benefits are
particularly relevant to micro-services and the developing concept of immutable infrastructure where VMs are
treated as disposable artefacts and can be regularly re-provisioned solely from version-controlled code.
Modifying such VMs directly is not permitted: all changes must be made to the source code itself.


Figure 7: Unikernel Relation to VMs and Containers

3.5 DevOps Continuous Integration and Delivery


Recent surveys ([63], [64]) have shown that DevOps is growing rapidly, especially in the enterprise, and the demand for people with DevOps skills is increasing. According to Amazon [65], DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity. Under the DevOps paradigm, there is no longer a distinct separation between development and operations teams. These teams can be merged into a single team, in which operations and development engineers participate together in the entire service lifecycle, from design through the development process to production support. Enterprises and organizations gain huge benefits [66] from adopting DevOps practices. Such benefits include: (i) improved collaboration between the various teams (developers and operations) of an organization; (ii) high velocity and efficiency on new deployments; (iii) reliable application updates and infrastructure changes; (iv) improved security by using compliance policies and configuration management techniques; and (v) rapid delivery, which increases the pace of new releases by adopting continuous integration and continuous delivery practices.




Figure 8: Continuous Integration, Continuous Delivery and Continuous Deployment Steps

Continuous Integration (CI) and Continuous Delivery (CD) are software development practices that automate
the software release process, from build to deploy. More specifically, CI [67] is a software development practice
where members of a team integrate their work frequently (usually daily) into a central software repository (e.g.
git, svn). Each integration is verified by an automated build (including tests) to detect integration errors as
quickly as possible, which allows teams to deliver cohesive software more rapidly. Continuous integration most
often refers to the build or integration stage of the software release process and entails both an automation
component (e.g. a CI or build service) and a cultural component (e.g. learning to integrate frequently). The key
goals of continuous integration are to find and address software bugs quicker, improve software quality, and
reduce the time required to validate and release new software updates. CD is the software development practice in which teams constantly produce new software releases (including new features, configuration changes, bug fixes and experiments) in short cycles, ensuring that the software can be reliably released at any time [68]. With continuous delivery, every code change is built, tested, and then pushed to a non-production testing or staging environment. The final decision to deploy to a live production environment is triggered by the developer, whereas in continuous deployment this last step is automatic.
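The gating logic that separates continuous delivery from continuous deployment can be sketched as follows; this is a simplified model with hypothetical stage names, while real pipelines have many more stages:

```java
public class Pipeline {
    // Stages in the order described above; which one a change reaches depends
    // on build/test outcomes and on whether deployment is automated.
    enum Stage { BUILD, UNIT_TEST, STAGING_DEPLOY, PRODUCTION_DEPLOY }

    /** Returns the last stage a code change reaches under the given outcomes. */
    public static Stage run(boolean buildOk, boolean testsOk, boolean autoDeploy) {
        if (!buildOk) return Stage.BUILD;        // integration error caught early
        if (!testsOk) return Stage.UNIT_TEST;    // automated tests gate the release
        // Continuous delivery stops at staging; continuous deployment goes live.
        return autoDeploy ? Stage.PRODUCTION_DEPLOY : Stage.STAGING_DEPLOY;
    }
}
```

With `autoDeploy` disabled the change stops at staging and a developer triggers the production release, mirroring the continuous-delivery flow described above.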

To further assist DevOps engineers, especially in the development phase, to collaborate under better conditions and to better promote CI/CD practices, a new category of tools, the Cloud IDE, has been on the rise over the past few years [69]. Simply stated, a Cloud IDE is usually a browser-based IDE that allows real-time collaborative software development via portable working environments (workspaces) deployed on the cloud. Cloud IDEs allow access from anywhere with an Internet connection (and can even provide access to a local setup), with minimal configuration needed. They provide support for all major software repositories, thus promoting collaboration and CI practices. Most state-of-the-art Cloud IDE working environments are containerized, allowing users to customize the container images according to their needs (e.g. Eclipse Che [70], SAP HANA [71]). Moreover, Cloud IDEs can connect to various cloud providers, making it easier for DevOps engineers to deploy their applications remotely.

Finally, one of the most challenging tasks of a DevOps engineer, particularly in the cloud area, is the development
of elastic applications, able to efficiently adapt their resources according to their needs. Elasticity is defined as
the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources
in an autonomic manner, such that at each point in time the available resources match the current demand as
closely as possible [1]. It is used to avoid inadequate provision of resources and degradation of system
performance while achieving cost reduction [72], making this service fundamental for cloud performance.
Nowadays, most cloud providers and third-party tools offer an automated way to scale resources by giving developers the ability to define the optimal provisioning policies for their applications. Horizontal scaling is
the scaling method of choice for many cloud systems, since it provides a way of scaling the application to meet its demands in an uninterruptible way. Horizontal scaling requires the application to support a way of cloning itself, so that it can be deployed in another virtual container to serve part of the demand. Although vertical scaling seems simpler, since it only requires increasing the resources of the virtual container hosting the application, it is in fact not appropriate for supporting an application's uninterrupted operation, since most operating systems do not support on-the-fly changes (without rebooting) to the available resources (e.g. CPU or memory) of a running instance. Thus, horizontal scaling is mostly preferred in cloud systems.

Auto-scaling techniques are distinguished into reactive and proactive (or predictive) ones [1]. Reactive techniques refer to methods that react to the current system and/or application state, which is determined from the latest values of monitored variables. Proactive (or predictive) techniques attempt to scale resources in advance of demand by predicting the latter. Reactive techniques may prove insufficient to support uninterrupted operation of the application at all times, especially when there is a sudden demand burst. This is due to the fact that acquiring new resources and instantiating a new execution environment (virtual container) requires a non-negligible time interval. On the other hand, proactive techniques are more promising; however, in the worst case they may fail to predict demand and fall back to acting as a reactive technique, with possible additional costs incurred for mis-predictions. Thus, auto-scaling is a significant challenge, as a badly performing auto-scaling technique may lead to problems such as under-provisioning (the application does not have enough resources), over-provisioning (the application reserves more resources than it really needs), and oscillation (scaling actions are carried out too quickly for the application to see the impact of each scaling action) [31].
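A reactive auto-scaling policy of the kind described above can be sketched as a simple threshold rule; the 80%/20% CPU thresholds here are hypothetical, and a production policy would also guard against oscillation with cool-down periods:

```java
public class ReactiveScaler {
    // Hypothetical thresholds: scale out above 80% CPU, scale in below 20%.
    static final double SCALE_OUT = 0.80;
    static final double SCALE_IN = 0.20;

    /** Decides the new instance count from the latest monitored CPU utilisation. */
    public static int decide(int instances, double cpuUtilisation) {
        if (cpuUtilisation > SCALE_OUT) {
            return instances + 1;      // under-provisioned: add capacity
        }
        if (cpuUtilisation < SCALE_IN && instances > 1) {
            return instances - 1;      // over-provisioned: release capacity
        }
        return instances;              // within bounds: do nothing, avoid oscillation
    }
}
```

A proactive variant would feed `decide` a predicted rather than an observed utilisation value.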

3.6 Annotation-Based Programming


Modern programming languages (e.g., Java, C#, Python) offer an extremely useful mechanism named annotations that can be exploited for several purposes. Annotations are a form of metadata providing information and instructions that are not part of the application itself [73]. Annotations do not directly affect program semantics, but they do affect the way software code is treated by tools and libraries, which can in turn affect the semantics of the running software. Annotations can be read from source files, binary files (e.g., class files), or reflectively at run time. They provide compilers and build engines with useful information and hints (e.g., suppress warnings), and allow code injection at compilation or deployment time for runtime processing decisions (e.g., adding loggers, providing handlers to count method accesses, etc.).

From the software engineer's perspective, annotations can practically be seen as a special interface which may be accompanied by several constraints, such as the part of the code that can be annotated or the part of the code that will process the annotations. An indicative example in Java is presented in Figure 9, which defines an annotation denoted as Test that will be used to annotate Java methods. The scope (Java methods) of the Test annotation is defined via another annotation, @Target(ElementType.METHOD), while the annotation @Retention(RetentionPolicy.RUNTIME) indicates that the Test annotation (and other annotations of the same type) will be retained by the VM so as to be parsed reflectively at run-time [74].




Figure 9: Indicative Example of Annotation Declaration in Java

Annotations are widely used by numerous frameworks such as the Spring Framework [75] and each framework
selects one handling technique in order to process annotations. In general, there are three strategies for
annotations handling:

Source code generation: This annotation processing option works by reading the initial source code and generating either new source code or modifications to existing code, as well as non-source artifacts (e.g., config files, documentation). The (code) generators typically rely on container or other programming conventions and work with any retention policy. Indicative frameworks that belong to this category are the Annotation Processing Tool (APT) [76] and XDoclet [77].
Bytecode transformation: Annotation handlers of this form parse binary and/or executable files containing annotations and emit modified binaries and/or newly generated executables. They can also generate non-binary artifacts (e.g., config files). Bytecode transformers can run offline (at compile time), at load-time, or dynamically at run-time. In Java, they work with a class or runtime retention policy (as shown in Figure 9). Indicative bytecode transformer examples include AspectJ [78] and Spring [75].
Runtime reflection: Annotation handlers of this form use reflection to programmatically inspect annotated objects at runtime. This approach typically relies on container or other programming conventions and requires a runtime retention policy. The most prominent testing frameworks, like JUnit [79], use runtime reflection for processing annotations.
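The runtime-reflection strategy can be illustrated with a minimal JUnit-style runner: it declares a runtime-retained Test annotation (mirroring the Figure 9 example) and reflectively invokes only the methods that carry it. The Runner and Suite names are illustrative:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class Runner {
    // Runtime-retained annotation, analogous to the Test annotation of Figure 9.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Test {}

    static class Suite {
        @Test public void ok() { /* a test body would go here */ }
        public void helper() { /* not annotated, so the runner skips it */ }
    }

    /** Reflectively invokes every @Test-annotated method of the given class. */
    public static int run(Class<?> cls) throws Exception {
        Object instance = cls.getDeclaredConstructor().newInstance();
        int executed = 0;
        for (Method m : cls.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(instance);
                executed++;
            }
        }
        return executed;
    }
}
```

Because the annotation carries a RUNTIME retention policy, `isAnnotationPresent` can observe it long after compilation, which is exactly what frameworks like JUnit rely on.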

3.7 Security Enforcement and Data Privacy Preserving


Data security has consistently been a major issue in information technology. In the cloud computing environment it becomes particularly serious, because data is located in different places, even all around the globe. The increasing number of connected devices and the huge amount of software being developed on a daily basis will continue to generate and introduce new attack vectors and exploit opportunities for malicious hackers. Data security and privacy protection are the two main factors of users' concerns about cloud technology. For this reason, the issue of continuous cloud and application security enforcement must be tackled, while enabling data privacy protection mechanisms at the cloud/hypervisor layer due to the co-existence of multiple users and services within the same hosts.

Data security is commonly referred to as the confidentiality, availability, and integrity of data. Security
enforcement mechanisms are in place to ensure data is not being used or accessed by unauthorized individuals
or parties. In addition, those mechanisms ensure that the data is accurate, reliable and available when an
authorized party needs it.

To this direction, one security enforcement mechanism that is widely used is the Intrusion Detection System (IDS). An IDS is a software component that automates the process of monitoring events within a computer system or network and analysing them for signs of possible violations, or imminent threats of violation, of computer security policies, acceptable use policies, or standard security practices. Such systems can also attempt to stop possible
incidents (IDPS - Intrusion Detection and Prevention System). Information gathering, logging, detection and prevention are among the capabilities offered by IDSs. As far as detection capabilities are concerned, most IDSs use a combination of signature-based detection, anomaly-based detection, and stateful protocol analysis techniques to perform in-depth analysis of the available data.

An IDS at the hypervisor or container level is able to monitor all network interfaces used by the execution environment of the system. The produced logs are stored locally and feed a database; in turn, an HTTP server can present this data through a web interface. IDSs require significant resources in terms of the computation capacity needed to process a packet and the amount of memory needed to store the security rule set. One way to speed up this inspection process is to take advantage of GPUs. Their low design cost, their highly parallel computation, and the fact that they are usually underutilized, especially in hosts used for intrusion detection purposes, make them suitable for use as an extra low-cost coprocessor for time-consuming problems like pattern matching. Many works have tried to use GPU capabilities to improve the current state of IDS and IPS systems [80]-[83].
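Signature-based detection, the simplest of the techniques above, amounts to matching payloads against a rule set of known patterns. A naive sketch follows; the signatures are made up and far simpler than a real IDS rule set:

```java
import java.util.Arrays;
import java.util.List;

public class SignatureIds {
    // Made-up signatures; production rule sets are far larger and richer.
    static final List<String> SIGNATURES =
            Arrays.asList("/etc/passwd", "<script>", "' OR 1=1");

    /** Flags a payload that matches any known malicious pattern. */
    public static boolean suspicious(String payload) {
        for (String signature : SIGNATURES) {
            if (payload.contains(signature)) {
                return true;
            }
        }
        return false;
    }
}
```

It is exactly this exhaustive pattern matching over every inspected packet that GPU-based approaches parallelise to speed up inspection.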

Encryption is another security mechanism which is intended to protect the confidentiality of digital data stored
on computer systems or transmitted via the Internet or computer networks. Encryption is the conversion of
electronic data, often referred to as plaintext, into another form, the ciphertext, by applying an encryption
algorithm and selecting an encryption key. Encryption algorithms are divided into two main categories:

i) Symmetric
ii) Asymmetric

Symmetric-key ciphers use the same key, or secret, for encrypting and decrypting a message or file. The most
widely used symmetric-key cipher is AES [84], which was created to protect government classified information.
Symmetric-key encryption is much faster than asymmetric encryption, but the sender must exchange the key
used to encrypt the data with the recipient before he or she can decrypt it. This requirement to securely
distribute and manage large numbers of keys means most cryptographic processes use a symmetric algorithm
to efficiently encrypt data, but use an asymmetric algorithm to exchange the secret key.

On the other hand, asymmetric cryptography, also known as public-key cryptography, uses two different but mathematically linked keys, one public and one private. The public key can be shared with everyone, whereas the private key must be kept secret. RSA [85] is the most widely used asymmetric algorithm, partly because both the public and the private keys can encrypt a message; the opposite key from the one used to encrypt a message is used to decrypt it. This attribute provides a method of assuring not only the confidentiality, but also the integrity, authenticity and non-repudiation of electronic communications and data at rest, through the use of digital signatures.
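The hybrid scheme described above, symmetric encryption of the data combined with asymmetric exchange of the secret key, can be sketched with the JDK's javax.crypto API. This is a minimal illustration relying on provider-default cipher modes; hardened code would specify an authenticated mode (e.g., AES/GCM) explicitly:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class HybridCrypto {
    /** Encrypts data with AES, wraps the AES key with RSA, then reverses both. */
    public static String roundTrip(String plaintext) throws Exception {
        // Fast symmetric key for the bulk data.
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey aesKey = gen.generateKey();

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] ciphertext = aes.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // Asymmetric pair used only to exchange the secret key.
        KeyPair rsaPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, rsaPair.getPublic());
        byte[] wrappedKey = rsa.doFinal(aesKey.getEncoded());

        // Recipient side: unwrap the AES key, then decrypt the data.
        rsa.init(Cipher.DECRYPT_MODE, rsaPair.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");
        aes.init(Cipher.DECRYPT_MODE, recovered);
        return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}
```

Only the small AES key passes through the slow RSA operation, while the bulk data takes the fast symmetric path, which is the efficiency argument made above.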

Another crucial security mechanism used to protect against potential security threats is the performance of Risk and Vulnerability Assessments. Vulnerability assessment is the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system, and it has much in common with risk assessment. Assessments are typically performed according to the following steps:

i) Cataloging assets and capabilities (resources) in a system;
ii) Assigning quantifiable value (or at least rank order) and importance to those resources;
iii) Identifying the vulnerabilities or potential threats to each resource;
iv) Mitigating or eliminating the most serious vulnerabilities for the most valuable resources.
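Steps ii) to iv) can be sketched as a naive prioritisation: score each asset by value times threat level and address the highest scores first. The scoring formula and asset names are illustrative only:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RiskRanking {
    static class Asset {
        final String name;
        final int value;        // step ii: importance assigned to the resource
        final int threatLevel;  // step iii: severity of identified threats

        Asset(String name, int value, int threatLevel) {
            this.name = name;
            this.value = value;
            this.threatLevel = threatLevel;
        }

        int risk() { return value * threatLevel; } // naive score: value x threat
    }

    /** Orders assets so the most serious exposures are mitigated first (step iv). */
    public static List<String> prioritise(List<Asset> assets) {
        List<Asset> sorted = new ArrayList<>(assets);
        sorted.sort(Comparator.comparingInt((Asset a) -> a.risk()).reversed());
        List<String> names = new ArrayList<>();
        for (Asset a : sorted) {
            names.add(a.name);
        }
        return names;
    }
}
```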

Although data privacy and data security are often used as synonyms, they share more of a symbiotic relationship. Data privacy is suitably defined as the appropriate use of data, and data privacy preserving mechanisms are in place to ensure that data is used according to the agreed purposes. Making sure all data is private and used properly can be a near-impossible task that involves multiple layers of security. Fortunately, with the right people, processes and technology, a data security policy can be supported through continual monitoring and visibility into every access point.

Privacy preserving mechanisms offer a set of high-level rules that allow all interested stakeholders to define the type and scope of data protection constraints, to prevent data access by unauthorized entities, and to restrict data movement between application services, countries or geographic/legal regions (e.g., the EU), availability regions and/or multiple cloud sites, so as to adhere to national and/or EU data restriction directives. Such mechanisms offer a safety net against data being unknowingly processed in remote datacenters across borders, where insecure data movement lurking in the background can break compliance with legal acts.
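A data-movement constraint of this kind can be sketched as a simple placement check; the region names and the "personal" classification label are hypothetical, not an encoding of any real directive:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DataLocationPolicy {
    private final Set<String> allowedRegions;

    public DataLocationPolicy(String... regions) {
        this.allowedRegions = new HashSet<>(Arrays.asList(regions));
    }

    /** Permits a placement unless protected data would leave the allowed regions. */
    public boolean permits(String classification, String targetRegion) {
        if (!"personal".equals(classification)) {
            return true; // unclassified data is unrestricted in this sketch
        }
        return allowedRegions.contains(targetRegion);
    }
}
```

A deployment orchestrator could consult such a check before every data placement or migration, rejecting moves that would carry protected data outside the permitted legal regions.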



4 Methodology Followed to Derive Unicorn System Requirements
Deriving system requirements is the cornerstone activity of any successful project. It plays a key role in successfully scoping, defining, estimating and managing a project right from the start. Requirements collection is unique to every project and its circumstances, but done well it brings many advantages: for instance, it can accommodate better resource management, system analysis and design, improve the quality of the delivered product, and minimize the risk of delays and overruns. The methodology selected and used for the Unicorn project is an agile methodology, which in principle is iterative, while the basic principles it relies on promote understanding between the business, technical and scientific needs of a project by laying out clear expectations at the beginning and at each milestone (software release) achieved by the project [86]. The agile methodology builds on increased communication throughout the project and delivers requirements earlier than traditional, waterfall approaches to software development.

The requirements are iteratively improved at each new milestone and are kept up-to-date in the backlog to influence, in parallel, several of the activities in the project (e.g., development, testing, uptake of new technologies). The aim is to bring together the technical and research partners of the Unicorn project and make them aware from the start of the important business aspects identified by its respective stakeholders. The methodology promotes understanding of the partners' different views, consolidates opinions and defines what Unicorn should do. This enables the collection and elicitation of concrete high-level requirements, promoting communication, alignment, consensus and active business user and customer involvement to meet the goals and needs of the project.

In the following paragraphs, a description of the agile and task-driven methodology followed by the Unicorn consortium is provided. This methodology aims to identify key stakeholders for the project, derive the Unicorn system requirements and steer the partners towards the technologies dominating the interests of its stakeholders, so as to guide the technical work that will follow after designing the Unicorn reference architecture (D1.2). Figure 10 depicts a high-level and abstract overview of the methodology process.


Figure 10: High-Level Abstract Methodology to Derive Unicorn System Requirements and Relevant Key Technologies

The first task of the methodology followed involved identifying and clearly defining the stakeholders and target
audience of the Unicorn platform while also providing an updated market positioning of the Unicorn eco-system
towards the continuously evolving cloud market. A comprehensive description of this task is found in Chapter 5.
Important outcomes of this task for the requirements collection process are a concise description of the targeted stakeholders, deriving a glossary of key technology terms that are understandable by Unicorn stakeholders, and
defining a comprehensive list of user roles for the Unicorn platform. The stakeholders are the ones for whom the Unicorn product will be developed and whose employees and management staff will use it; therefore, a common terminology/glossary of the key technologies comprising the Unicorn platform was defined and agreed upon by all partners and is provided in Chapter 3. This terminology will be used as a reference guide across all future deliverables and interactions with Unicorn stakeholders.

The next task involved trawling the websites of ICT industry research and technology leaders for global market and technology reports (e.g., Gartner, IDC), best practices from ICT visionaries, and the bibliography for key technologies (e.g., cloud platforms, container solutions) and requirements (e.g., cloud credential management),
relevant to the Unicorn identified stakeholders and target audience. This process is meant to act as a starting
point for the market requirements collection, but not as a comprehensive list of detailed technologies and
requirements particularly relevant to the Unicorn project. In addition, it was considered vital to validate this
initial list of collected requirements in collaboration with the industrial partners and practitioners in order to
increase the likelihood of the widespread industry adoption of the results produced by the Unicorn project. A
summary of key findings and points of interest from the ICT industry reports relevant to the Unicorn project are
listed in Section 4.1 that follows.

To this end, an online questionnaire and interview process was developed to probe the EU ICT industry to provide, validate and prioritize fine-grained functional and non-functional system requirements relevant to the Unicorn platform (note: all questions comprising the questionnaire can be found in Annex I). This is important, as several cloud reports (e.g., Gartner's Magic Quadrant, RightScale's State of the Cloud report) contain statements such as "elastic scaling and performance monitoring are driving cloud adoption" while, at the same time, "elasticity and monitoring are also considered major challenges across businesses of all types", without highlighting which elasticity and monitoring features are key to the market and which challenges still need to be addressed. Similarly, while security is often stated as something companies take highly into consideration, often offering high standards and guarantees to their customers, security and data privacy protection are also at the top of the list of cloud challenges. At this point, one is left wondering which enforcement mechanisms are applied for security and data privacy protection, and which are still considered challenges. On a different level, as introduced in Chapter 2, while the terms hybrid-cloud, multi-cloud or even federated-cloud are used interchangeably in studies across the industry, only when stakeholders are specifically questioned (a task performed by Unicorn) is it revealed that organisations often refer to different cloud deployment models when using these terms.

Therefore, the interview process was designed to study such statements and clarify generalizations like the ones mentioned above. The interview process is also beneficial for identifying the key technologies taken up by the SME and Startup eco-system in Europe, as well as the emerging technologies that are within their interests but cannot yet be successfully integrated into their software stack due to the different challenges they face. Specifically, the interview process targeted obtaining insights into more than just the key technology concepts dominating the interests of the Unicorn stakeholders. For instance, containerization is clearly of interest to stakeholders; however, are there common go-to solutions, or are mixtures of solutions utilized? These questions are of interest for the project and will help shape the Unicorn reference architecture and business model that will be documented in D1.2 and D6.1, respectively. In particular, the interview process was held after the online questionnaire was completed and was refined each time, based on the given answers, to best adapt to the interviewee's profile and obtain deeper insights. The interviewees were carefully selected by the consortium to span across different industry



domains relevant to Unicorn and included: (i) 4 Startups from the CINCUBATOR Startup Hub; (ii) 2 SME members of the CyberForum digital alliance; (iii) the 4 Unicorn pilots serving as platform demonstrators; and (iv) 10 interviewees from EU-based organisations of various sizes (large enterprises, SMEs, Startups) not affiliated directly or indirectly with the Unicorn project. A comprehensive description of the questionnaire, the interview process and the key findings derived from this process can be found in Chapter 6.

At this point, it is important to mention that all interviewees were explicitly notified that the information given during the interview process would be kept confidential, that the interviewees' personal details would not be revealed, and that the processing of all answers would be conducted anonymously, in compliance with the European Union's data privacy laws, solely for the purpose of deriving the technical requirements for the Unicorn project. For these reasons, individual interviewee answers are not revealed in this deliverable.

Having obtained all completed questionnaires and interviews, the next two tasks involved cross-examining, correlating, analysing and elaborating on the results in order to map the obtained key findings to a list of functional and non-functional system requirements (Chapter 7). In addition, this procedure helped us better understand the goals and expectations of the users and stakeholders in the market that Unicorn wishes to target. This process has greatly contributed to the project, as it gives us a more concise picture of the key technologies to take up over the span of the project (e.g., which cloud platforms and containerized solutions are used by our stakeholders) and supports deriving the Unicorn reference architecture in D1.2. Based on the deep insights obtained from the interviews, we defined a set of user- and system-perspective technical requirements that pave the way for the design and development of the Unicorn platform. Furthermore, we also provide a description of every role that will be considered throughout the project, and of how each role is connected with the functional requirements of the project. Prioritizing the obtained requirements was required in order for the long list of industry-driven requirements to reflect the particular needs emerging from the Unicorn demonstrator use-cases. We note that, in order to reduce repetition, the requirement prioritization based on the demonstrators and the key technologies targeted by the project will be introduced in D1.2, where each demonstrator and technology will be described and justified in detail, referring to the relevant use-cases and the expected KPIs to be achieved by utilizing the Unicorn platform.



4.1 Key Findings from Industry Studies

Table 1: Industry Studies and Points of Interest Relevant to Unicorn

Study or Report: RightScale 2016 State of the Cloud Report [87] (1060 respondents: 34% Developers, 55% IT Operations; 61% US, 19% EU)
- Hybrid-cloud adoption is dominating ICT industry interests (71%, up from 58% in 2015).
- Challenges for adopting the hybrid-cloud deployment model include lack of resources/expertise and managing multi-cloud offerings.
- DevOps growth, and specifically container solution adoption, is on the rise; Docker in particular is highly adopted by enterprises (Docker market share more than doubled compared to 2015).
- The greatest interest in containerized solutions is seen in European tech companies.

Study or Report: RightScale 2017 State of the Cloud Report [27] (1002 respondents; 61% US, 20% EU)
- Hybrid-cloud adoption numbers are even stronger in 2017 (78%).
- Top cloud computing challenges for adopters now include (other than security and multi-cloud deployments): managing costs, monitoring and governance, improving performance, and compliance.
- Challenges for adopting containerized solutions include lack of experience, security, maturity, monitoring and resource orchestration.

Study or Report: Gartner 2016: Magic Quadrant IaaS Cloud Solutions [88]; Gartner 2016: Magic Quadrant PaaS Cloud Solutions and Containerized Environments [89]
- The studies report notable cloud providing solutions, including market leaders, visionaries, challengers and niche players.
- Distinction of recommended cloud service providers per business-related operation.
- Vendor strengths and challenges are analysed; even for AWS (the only provider notable for its auto-scaling solution), elastic scaling presents severe challenges and growth potential that can drive businesses towards, or away from, specific cloud offering providers.
- The IaaS cloud market has clear leaders; however, the PaaS and container markets are considered battlefields, although Docker seems to be gaining a clear advantage in the container solution field.

Study or Report: Veracode 2016: Secure Development Survey [90] (351 respondents; 230 US, 121 EU)
- Sensitive data exposure is the prime concern for all companies.
- Security and data privacy protection are challenges for cloud applications developed by large enterprises, SMEs and Startups.
- Most organizations want (but are not always able) to incorporate security earlier in the software lifecycle (requirements and development phases) rather than after the development or testing phase.
- The report highlights that DevOps provides more opportunities to integrate security and data privacy protection, mentioning security methods enforced by SMEs and Startups, including dynamic testing, web firewalls and runtime application protection in production.
- Most significant challenge: runtime software vulnerability and system malware detection.

Study or Report: VisionMobile 2017: State of the Developer Nation [91] (21,200+ developers)
- Amazon is the leading public cloud provider, regardless of target audience and company size, followed by the Azure cloud for private cloud deployments.
- SMEs use public cloud providers more than large enterprises.
- The report highlights the popular programming languages and frameworks used in different business domains (machine learning, AR/VR, front-end development, backend development, etc.).

Study or Report: LightBend 2016: Cloud, Container & Micro-services [92] (2151 JVM developers around the globe)
- Micro-services are adopted by 55% of responding DevOps teams.
- DevOps teams are embracing micro-services because of increased security, improved resource management and (elastic) scaling.
- Micro-service laggards are large enterprises.
- Tools needed to ease micro-service delivery include API management, service orchestration, monitoring and continuous delivery.
- Portability is considered by DevOps teams a huge barrier to overcome when building cloud apps.

Study or Report: DZone 2017: "DevOps: Continuous Delivery and Automation" (497 respondents; 30% US, 45% EU, 25% Other)
- 1 out of 4 SMEs has a dedicated DevOps team, in contrast to large enterprises with a 1-out-of-2 ratio.
- 67% of DevOps teams use micro-services in some form, compared to 27% in the previous year.
- 51% of DevOps teams use containerized solutions, compared to 25% in the previous year.
- Barriers preventing DevOps teams from adopting a continuous delivery pipeline include lack of experience and of unified environment tools for management and monitoring.

Study or Report: GitLab: 2016 Global Developer Report [93] (362 Startup and Enterprise CTOs)
- Developers use git for source control on a daily basis (92%), continuous integration is adopted at some level by 77% of questioned organisations, and application monitoring is considered very important by 67%.

Study or Report: RebelLabs: 2016 Development and Productivity Report and Java Landscape [94] (2040 respondents)
- The Eclipse IDE has been the most popular IDE among developers for over 5 years and is used exclusively by 48% of questioned developers, with the percentage growing to 55% when used alongside other IDEs (IntelliJ IDEA, NetBeans, Spring Tool Suite).
- There is a shift among developers from desktop IDEs to cloud IDEs, with the most notable cloud IDEs being Eclipse Che, SAP Hana and Cloud9.

Study or Report: RebelLabs: 2017 Programming the Web Report [95] (2000 respondents)
- Micro-service adoption is particularly high for small businesses, while large enterprises are more hesitant.
- 68% of micro-service adopters claim that micro-services make developers' jobs easier.
- RebelLabs 2017 is the only report denoting the go-to frameworks for micro-service development in Java (Spring, Play).

Study or Report: StackOverflow: 2016 Developer Report [96] (56,003 developers); StackOverflow: 2017 Developer Report [97] (64,000 developers)
- The reports denote the most popular programming languages per business operation domain.
- The annotation programming paradigm is dominating the interests of Java and Python developers, particularly due to the popularity of the Spring and Django frameworks, which provide data abstractions.



5 Unicorn Stakeholder Identification

5.1 Stakeholders and Target Audience


Small and medium enterprises (SMEs) play a very important role in the European economy. Statistics show that, at present, SMEs (including start-ups) amount to 99% of organisations, provide 60% of the total production value and about 40% of the profit [98]. Moreover, SMEs offer 75% of jobs. SME contributions to the innovation system include not only R&D-based new products and services, but also improved designs and processes and the adoption of new technologies.

At the same time, support for European SMEs lags behind due to market and economic factors, such as intense market competition, demand atrophy, resource costs, high taxes and low investment.

Strategies to enhance the competitiveness of innovative ICT SMEs should take into account that:

- New information and communication technologies facilitate global reach and help reduce the disadvantage of scale economies which small firms face in all aspects of business.
- Flexible specialisation has proven to be a particularly successful model of industrial organisation: through close co-operation with other firms, SMEs can take advantage of knowledge externalities and rapidly respond to market changes.
- Usage of cloud development environments lowers the need for administration skills and frees the company to concentrate on its core business. While today's installations are often local, it is only a matter of time before development environments are migrated to Cloud platforms.
- Cloud provides a perfect relationship between user demand and price: it is elastic. Fees increase incrementally as users use more functionalities.
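To make the elasticity point concrete, the following is a small arithmetic sketch; the unit prices are invented for illustration only and do not reflect any real provider's pricing. A pay-per-use cloud fee grows incrementally with consumption, whereas a fixed on-premise cost is paid regardless of usage.

```python
# Illustrative only: invented unit prices, not real cloud provider pricing.
# An elastic (pay-per-use) cost grows incrementally with consumption,
# while a flat on-premise cost is paid regardless of actual usage.
PRICE_PER_VM_HOUR = 0.05   # hypothetical price per VM-hour, in EUR
FLAT_MONTHLY_COST = 400.0  # hypothetical fixed monthly server cost, in EUR

def elastic_cost(vm_hours):
    """Fees increase incrementally as more resources are consumed."""
    return vm_hours * PRICE_PER_VM_HOUR

# Below the break-even point, the elastic model is cheaper than the flat one.
for hours in (1000, 4000, 10000):
    print(hours, round(elastic_cost(hours), 2), FLAT_MONTHLY_COST)
```

The design point is simply that cost tracks demand: a small firm with low usage pays little, which is exactly the scale-economy disadvantage the bullet above says the cloud reduces.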

At the same time, current cloud environments have significant weaknesses that reinforce a critical view of the cloud transition. The main barriers for cloud development are outlined as follows:

- Complex and costly development process: Developing new SaaS solutions, or redeveloping existing solutions for the cloud on existing PaaS, is a complex and very costly project, often making it prohibitive, especially for SMEs.
- High dependency on the cloud infrastructure provider: The fear of so-called vendor lock-in is one of the major barriers to cloud service adoption, as customers cannot easily move to a competitor's service.
- Security concerns: Deploying confidential information and critical IT resources in the cloud raises concerns about vulnerability to attack, especially because of the anonymous, multi-tenant nature of cloud computing.
- Data privacy: Regulation of data privacy presents the additional threat of significant legal and financial consequences if data confidentiality is breached, or if cloud providers inadvertently move regulated data across national or European borders. A CSO Online survey [99] found that the top five security- or privacy-related concerns for the cloud were all related to ubiquitous data access, regulatory compliance and managing access to the data and the applications.

Unicorn's scope lies at the core of strengthening innovation capacity and developing innovations that meet the needs of European ICT SMEs and start-ups. The project aspires to bring together all stakeholders involved in the value chain of developing Cloud software services, and to actively involve external SMEs and startups through


validation subcontracts. The project aims at delivering a set of innovative concepts, tools and services for making European ICT and software engineering SMEs more competitive and increasing their scientific and technological potential.

Unicorn's specific target audience comprises IT service providers who, according to the Digital SME Alliance, count over 750,000 SMEs in Europe. These SMEs are eager to increase their share of the huge Cloud Computing market, worth over $131 billion, of which North America takes home more than half of the global revenues.

We are targeting the following three audience categories:

- Small and medium-sized Independent Software Vendors (ISVs), who currently offer on-premise business applications but, in the future, want to offer these as a service.
- Startups, who intend to deploy their own new services and need to develop and deploy secure and elastic applications.
- SMEs already offering SaaS solutions: Unicorn features will allow them to concentrate on core functionality and re-use particular knowledge, instead of spending effort on scaling, monitoring and security issues.

Concluding, Unicorn will contribute to all three EU Digital Single Market (DSM) pillars: to the "Access" pillar, by lowering the barrier for SMEs to develop advanced cloud services; to the "Environment" pillar, by supporting the creation of a trusted cloud environment for European SMEs; and finally to the "Economy & Society" pillar, by offering a solution that will improve interoperability, contribute to standards and allow ICT SMEs to concentrate on their core competencies and grow.

5.2 User Roles


Table 2 introduces the identified user roles for the Unicorn eco-system. From this table, we observe that the Unicorn eco-system involves many roles with diverse responsibilities. Some of these responsibilities may overlap among users of the platform, which, at first, may seem to lead to a misleading interpretation of user role duties. However, as we will see in the next chapter, in DevOps teams the lines between roles in the development team are quite blurred, with team members often taking up responsibilities spread across different user roles (e.g., a Cloud Application Developer may also be in charge of Testing, or the Application Administrator may also be a Developer).

In the following table, the actor terminology and descriptions are designed to clarify and summarize each actor's roles.

Table 2: Unicorn Actors

Cloud Application Owner: The person providing the vision for the application as a project, gathering and prioritizing user requirements and overseeing the business aspects of deployed applications (e.g. business delivery, functioning and services of the application) in accordance with various criteria (e.g. cost minimization and policy definition, such as legal constraints).

DevOps Team: Responsible for the development, operation and testing of cloud applications; includes the roles Cloud Application Product Manager, Cloud Application Developer, Cloud Application Administrator and Cloud Application Tester.

Cloud Application Product Manager: The person defining the cloud application architecture and implementation plan based on the Cloud Application Owner's requirements. This person is also responsible for packaging the cloud application and enriching the deployment assembly with runtime enforcement policies for the placeholders defined via code annotations by the Cloud Application Developer.

Cloud Application Developer: The person who develops a cloud application by using the Unicorn-compliant code annotation libraries, so that it runs on a Unicorn-compliant (multi-)cloud execution environment.

Cloud Application Administrator: The person responsible for deploying and managing the lifecycle of developed, Unicorn-compliant cloud applications. This person ensures the application runs reliably and efficiently while respecting the defined business or other incentives in the form of policies and constraints.

Cloud Application Tester: The person responsible for the quality assurance and testing of a cloud application. The Cloud Application Tester performs deployment assembly validation (at business and technical level).

Cloud Application End User: The person using the deployed Unicorn-compliant cloud application.

Unicorn Administrator: The person responsible for managing and maintaining the Unicorn ecosystem, which includes infrastructure and various software and architectural components, e.g. the Core Context Model, code annotation libraries and Enablers interpreting and enforcing given policies and constraints.

Unicorn Developer: The person who creates Unicorn-related (software) components for compliant Cloud Providers and/or DevOps Engineers, such as Monitoring Probes, code annotation libraries and services utilizing the Unicorn API.

Cloud Provider: An organization or service provider that provides cloud offerings in the form of programmable infrastructure according to a service-level agreement. The Cloud Provider is also responsible for operating the Cloud Execution Environments that will host, entirely or partially, Unicorn-compliant Cloud Applications.



Finally, we note that, as can be observed in Chapter 7, some of the Actors presented in the previous table may not be assigned to any functional requirement (e.g., the Cloud Application End User); however, their existence contributes to a more complete description of the overall system.
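Several of the roles above revolve around code annotations: the Cloud Application Developer marks placeholders in code, and the Cloud Application Product Manager later binds runtime enforcement policies to them. The sketch below illustrates this concept only; it does not represent the actual Unicorn annotation libraries (which are defined later in the project), and all names and parameters in it are hypothetical.

```python
# Hypothetical sketch of an annotation-style placeholder, in the spirit of the
# Unicorn code annotation concept; all names here are illustrative, not real APIs.
import functools

# Registry collecting annotated functions and their placeholder metadata,
# which a deployment-assembly packager could later enrich with policies.
PLACEHOLDERS = {}

def monitored(metric, threshold=None):
    """Mark a function as a monitoring/elasticity placeholder."""
    def decorate(func):
        PLACEHOLDERS[func.__name__] = {"metric": metric, "threshold": threshold}
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)  # behaviour unchanged; only metadata added
        return wrapper
    return decorate

@monitored(metric="response_time_ms", threshold=200)
def handle_request(payload):
    return {"status": "ok", "echo": payload}

print(PLACEHOLDERS["handle_request"])  # metadata a runtime policy could bind to
```

In such a scheme, the developer only declares intent in code, while the enforcement logic (scaling, monitoring, policy checks) stays outside the application, which matches the separation of duties between the Developer and the Product Manager described in Table 2.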

5.3 Market Positioning


Over the past years, the worldwide cloud market has evolved and is expected to enter a period of stabilisation, with projected growth of 18% in 2017 to a total of $246.8 billion, up from $209.2 billion in 2016, according to Gartner [100]. The highest growth will come from cloud system infrastructure services (IaaS), which are projected to grow 36.8% in 2017 to reach $34.6 billion, even though the IaaS cloud market has clear leaders in AWS and Microsoft, as suggested by Gartner's Magic Quadrant for Cloud Infrastructure as a Service worldwide in 2016 [101]. Cloud Application Infrastructure Services (PaaS) are also expected to increase from $8,851 million in 2017 to $14,798 million by 2020, while Cloud Management and Security Services follow a similar growth rate, from $8,768 million to $14,004 million, respectively [102]. According to KPMG, Platform-as-a-Service (PaaS) adoption is predicted to be the fastest-growing sector of cloud platforms, growing from 32% adoption in 2017 to 56% in 2020 [103]. The application container segment also reached a robust $762 million in 2016 and is forecast to grow at a 40% compound rate over the next four years to $2.7 billion [104], suggesting an impressive adoption growth for a technology that was only recently brought to the market.
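As a quick sanity check on such forecasts, compound growth can be computed directly. The sketch below derives the compound annual growth rate (CAGR) implied by the container segment figures quoted above ($762 million in 2016 reaching $2.7 billion four years later), and conversely projects the starting figure forward at the cited 40% rate.

```python
# Sanity-checking the container market forecast quoted above:
# $762M (2016) forecast to reach $2.7B over four years at ~40% compound growth.
start, end, years = 0.762, 2.7, 4            # figures in billions of dollars

# Implied CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")           # roughly 37%, close to the ~40% cited

# Conversely, projecting $762M forward at a flat 40% compound rate:
projected = start * (1 + 0.40) ** years
print(f"$762M at 40% for 4 years: ${projected:.2f}B")
```

The two numbers bracket the published forecast, which is consistent with the report rounding the growth rate to 40%.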

In parallel, DevOps is a leading software engineering trend, representing the shift from traditional phased, large-scale delivery models to an agile, continuous delivery mind-set, enabled by better integrating development and operations teams within IT and by employing more automated processes. The DevOps and micro-service eco-system market is broadly expected to grow globally at a robust CAGR of 16% between 2017 and 2022, reaching $10 billion by 2021 [105]. In practice, though, coding and deploying reliable, loosely coupled, production-grade applications based on micro-services remains challenging and even frustrating for software teams, who need to account for service discovery, load balancing, fault tolerance, end-to-end monitoring, dynamic routing for feature experimentation, compliance and security.

Today, a number of industrial players have hit the market with cloud developer solutions regarding Containers,
Unikernels and Micro-services (or DevOps in a broader sense) as depicted in the following figure.




Figure 11: Unicorn Market Positioning

In brief, from the containers technology perspective, the open-source Docker is practically leading the market and is often characterized as an almost de facto container standard (also evident in our interview process results), having gained the most public traction due to its simplicity and flexibility in allowing developers to wrap their software in a container that provides a completely predictable runtime environment. Other examples of container technologies are CoreOS rkt (Rocket) and Cloud Foundry's Garden/Warden. A recent survey conducted by Cloud Foundry [106], though, listed significant container challenges, such as container management, monitoring and persistent storage, that may hinder further market penetration, while container persistence in particular is acknowledged as a barrier to advancing to stateful containers that are appropriate for production environments.

From the unikernel perspective, although the concept is quite old (dating back to the 1980s), a number of ecosystem projects supporting the development and use of unikernels have emerged in the cloud computing age, allowing for the creation of minimal, bespoke unikernel operating systems in many different ways, for many different applications, on many different hardware platforms. Some systems (like Rumprun) are language-agnostic and provide a platform for any application code based on the requests it makes of the operating system, while others (like MirageOS and HaLVM) leverage high-level languages and a runtime to provide an API for operating system

functionality. OSv and the Xen hypervisor have gained significant attention, yet they also impose certain limitations on applications aspiring to a unikernel compilation (e.g. no multiple processes on a single machine, running as a single user, and the need to provide for internal diagnostics when it comes to debugging). Overall, the unikernel market remains in a rather embryonic state, with most solutions still in their experimental phases, while it is expected to be affected by the future evolution of containers (e.g. Docker's acquisition of Unikernel Systems).

With regard to micro-services, although the discussion about micro-services architectures started in 2014, their actual widespread implementation was initiated by Netflix, which open-sourced plenty of frameworks for implementing micro-services [107]. In fact, the rise of containers and the broader acceptance of web protocols, such as HTTP, JSON and REST, has brought service orientation back to contemporary application development and is driving the micro-services momentum. In May 2017, two significant industry-driven initiatives in the micro-services and DevOps world were announced: Istio, an open technology by Google, IBM and Lyft to streamline the management and security of micro-services through an integrated service mesh; and OpenShift.io, a free, online development environment by Red Hat optimized for creating cloud-native, container-based applications and automating the entire application pipeline, enabling companies to become more DevOps-driven and agile. In this context, it should be noted that the role of orchestrators, as well as of continuous integration / continuous delivery solutions, is also instrumental for effective micro-services management and deployment. Kubernetes, an open-source platform for automating the deployment, scaling and operation of application containers across clusters of hosts, providing a container-centric infrastructure, is acknowledged as a leader in container orchestration and management, followed by other platforms, such as Docker Datacenter, Apache Mesos and Cloud Foundry, that also run and orchestrate micro-services.

In more detail, in the following tables, 9 developer platforms (namely Docker, IncludeOS, Istio, Linkerd, MirageOS, OpenShift.io, OSv, Rumprun and rkt) have been selected, taking into account their relevance to Unicorn and the degree to which their features represent their category, and have been further analysed. Note: the information provided in the tables is based on the official documentation provided on each platform's website and GitHub at the time this deliverable was written (May 2017).



Table 3: Market Players Analysis: Brief Overview

Docker [108] (Containers): Docker is a container platform, packaging an application and its dependencies in a virtual container in order to enable flexibility and portability in where the application can run, to build agile software delivery pipelines (allowing for shipping new features faster and more securely), and to manage apps side-by-side in isolated containers to get better compute density. Supported languages: all. Supported platforms: Ubuntu, Debian, Red Hat Enterprise Linux, CentOS, Fedora, Oracle Linux, SUSE Linux Enterprise Server, Microsoft Windows Server 2016, Microsoft Windows 10, macOS, Microsoft Azure, Amazon Web Services.

IncludeOS [109] (Unikernels): IncludeOS is an includable, minimal unikernel operating system for C++ services running in the cloud, providing a bootloader, standard libraries and the build and deployment system on which to run services. Supported languages: C++. Supported platforms: Linux, Microsoft Windows and Apple OS X.

Istio [110] (DevOps / Microservices): Istio is an open platform to connect, manage and secure microservices, providing an easy way to create a network of deployed services with load balancing, service-to-service authentication and monitoring, without requiring any changes to service code. Supported languages: all for app development. Supported platforms: platform-independent, but service deployment only on Kubernetes (v1.5 or greater) at the moment; other environments will be supported in future versions.

Linkerd [111] (DevOps / Microservices): Linkerd is a transparent proxy that adds service discovery, routing, failure handling and visibility to modern software applications. Supported languages: all. Supported platforms: all.

MirageOS (Unikernels): MirageOS is a library operating system that constructs unikernels for secure, high-performance network applications across a variety of cloud computing and mobile platforms. Base unikernel language: OCaml. Supported platforms: x86_64 or armel Linux host to compile the Xen kernel; FreeBSD, OpenBSD or Mac OS X for the user-level version.

OpenShift.io [112] (DevOps / Microservices): OpenShift.io is a Kubernetes-based container management platform that provides developers with the tools they need to build cloud-native, container-based apps, including team collaboration services, agile planning, developer workspace management, an IDE for coding and testing, as well as monitoring and continuous integration and delivery services. Supported languages: all. Supported platforms: Linux.

OSv (Unikernels): OSv is a new open-source operating system for virtual machines from Cloudius Systems. OSv was designed from the ground up to execute a single application on top of a hypervisor, resulting in superior performance and effortless management. Supported languages: JVM languages (Java, JRuby, Scala, Groovy, Clojure, JavaScript) and Ruby. Supported platforms: built on a 64-bit x86 Linux distribution.

Rumprun [113] (Unikernels): Rumprun is a production-ready unikernel that uses the drivers offered by rump kernels, adds a libc and an application environment on top, and provides a toolchain with which to build existing POSIX-y applications as Rumprun unikernels. Supported languages: C, C++, Erlang, Go, Java, JavaScript (node.js), Python, Ruby and Rust. Supported platforms: hw/x86+x64 and Xen/x86+x64.

rkt [114] (Containers): CoreOS rkt is a CLI for running application containers on Linux, designed to be secure, composable and standards-based. Supported languages: all for app development; command-line environment for container construction (no custom DSL). Supported platforms: Linux.



Table 4: Market Players Analysis DevOps Support and Highlight Features

Docker
  Development, Continuous Integration and Testing: Complete developer toolkit for creating containerized apps (build, test and run multi-container apps). Docker Compose for development, testing and staging environments, as well as CI workflows.
  Continuous Deployment & Packaging: Deploy in Docker Cloud, AWS, Azure, Digital Ocean, Packet, SoftLayer. Universal packaging, portability to any machine running Docker.
  Orchestration, Management & Monitoring: Docker Compose for orchestration, also running Kubernetes, Mesos, Amazon ECS, Google Container Engine. Docker Machine for provisioning and managing Dockerized hosts.
  Security: Secure by default: mutual TLS, certificate rotation, image signing and container isolation.
  Scalability & Elasticity Control: Docker Swarm: manual scaling and built-in swarm clustering. Software-defined networking connects containers together, and intelligently routes and load balances traffic.
  Add-ons: Docker Store distributing free and paid images from various publishers. A number of Docker certified plugins.

IncludeOS
  Development, Continuous Integration and Testing: Not addressed.
  Continuous Deployment & Packaging: KVM, VirtualBox and VMWare support with full virtualization, using x86 hardware virtualization; runs on any x86 hardware platform.
  Orchestration, Management & Monitoring: Not addressed.
  Security: Increased security by default in unikernels.
  Scalability & Elasticity Control: Not supported.
  Add-ons: -

Istio
  Development, Continuous Integration and Testing: Conversion of disparate microservices into an integrated service mesh. Dynamic request routing for A/B testing. Fine-grained control of traffic behaviour with rich routing rules, fault tolerance, and fault injection.
  Continuous Deployment & Packaging: Deployment of microservices without worrying about service discovery. Provision for canary deployments.
  Orchestration, Management & Monitoring: Policy changes are made by configuring the service mesh. Extended version of the Envoy proxy to mediate all inbound and outbound traffic for all services in the service mesh. Automatic zone-aware load balancing and failover for HTTP/1.1, HTTP/2, gRPC, and TCP traffic. Mixer for enforcing access control and usage policies across the service mesh and collecting telemetry data from the Envoy proxy and other services. Fleet-wide visibility: automatic metrics, logs and traces for all traffic within a cluster, including cluster ingress and egress.
  Security: Traffic encryption, service-to-service authentication and strong identity assertions between services in a cluster based on policies. Vulnerability checks of a network and detection of unusual patterns (caused by malware and bots). Key and certificate distribution in Istio Auth is based on Kubernetes secrets. No support for authorization at the moment.
  Scalability & Elasticity Control: A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  Add-ons: -
linkerd
  Development, Continuous Integration and Testing: Not applicable.
  Continuous Deployment & Packaging: linkerd runs as a separate standalone proxy: applications typically use linkerd by running instances in known locations and proxying calls through these instances, i.e., rather than connecting to destinations directly, services connect to their corresponding linkerd instances and treat these instances as if they were the destination services.
  Orchestration, Management & Monitoring: A consistent, uniform layer of instrumentation and control across services: linkerd applies routing rules, communicates with existing service discovery mechanisms, balances request traffic using real-time performance (reducing tail latencies across the application), and provides dynamic, scoped, logical routing rules, enabling blue-green deployments, staging, canarying and failover.
  Security: Not applicable.
  Scalability & Elasticity Control: Handles tens of thousands of requests per second per instance with minimal latency overhead. Scales horizontally with ease.
  Add-ons: -
MirageOS
  Development, Continuous Integration and Testing: Solo5 is the "base layer" to run and debug MirageOS unikernels. All source code dependencies of the input application are explicitly tracked, including all the libraries required to implement kernel functionality.
  Continuous Deployment & Packaging: Runs under Xen and KVM hypervisors, and lightweight hypervisors like BSD's bhyve. Deploy in Amazon EC2 and Google Compute Engine. Potential to specify a version or range of versions for a package dependency.
  Orchestration, Management & Monitoring: Support for logging only.
  Security: Increased security by default in unikernels.
  Scalability & Elasticity Control: Seamless scaling of data structures through Irmin, a library for designing Git-like distributed databases, with built-in branching, snapshotting, reverting and auditing capabilities.
  Add-ons: Rresult is an OCaml module for handling computation results and errors in an explicit and declarative manner without resorting to exceptions.
OpenShift.io
  Development, Continuous Integration and Testing: An online development environment for planning and developing hybrid cloud services, with prioritizable backlogs and kanban boards as well as coding, editing and debugging tools built on Eclipse Che. Integrated and automated CI/CD pipelines. Integration of the Jenkins Pipeline plugins allows developers to assemble their build pipeline; pipeline definitions are written using a Groovy DSL.
  Continuous Deployment & Packaging: Automatically create containerized development environments with the workspace management capabilities of Eclipse Che, using OpenShift Online, a managed, multi-tenant offering of Red Hat OpenShift. Automatically create Linux-container-based environments without the need to install anything locally or deal with docker commands and Kubernetes configuration (or YAML) files.
  Orchestration, Management & Monitoring: OpenShift.io Analytics applies machine learning algorithms based on the usage pattern of components. The data is gathered from various public data sources such as GitHub, Maven and NPM, along with Red Hat's own internal OpenShift data.
  Security: Detection of vulnerable packages (indirectly through analytics). Container Health Index that inspects and grades all of Red Hat's own container products, as well as those from its ISV partners, to ensure they are secure and stable.
  Scalability & Elasticity Control: Not addressed.
  Add-ons: Red Hat OpenShift Application Runtimes, pre-built containerized runtime foundations for microservices that include support for Node.js, Eclipse Vert.x, WildFly Swarm and others.
OSv
  Development, Continuous Integration and Testing: Rapidly building and running an application on OSv through Capstan.
  Continuous Deployment & Packaging: Runs under hypervisors: KVM and Xen (fully), VirtualBox and VMWare (experimental). Deploy in Amazon EC2 (fully functional) and Google Compute Engine (experimental). Packaging and running an application on OSv through Capstan.
  Orchestration, Management & Monitoring: OSv REST API to simplify management. In-browser dashboard providing live updates, including OS basics such as memory usage and CPU load. Tracepoints for all system and application functionality, JMX endpoints (using the Jolokia JMX-over-REST connector), and application-specific metrics, which can be added by the application developer.
  Security: Increased security by default in unikernels.
  Scalability & Elasticity Control: Cloud-init mechanism providing per-instance configuration parameters to an OSv VM at boot time.
  Add-ons: -
Rumprun
  Development, Continuous Integration and Testing: Rumprun does not build a toolchain, but creates wrappers around a toolchain the developer supplies. Rump kernels essentially provide a driver kit of easy-to-integrate drivers, with the set of drivers varying per driver kit, using the NetBSD anykernel architecture to provide unmodified NetBSD kernel drivers.
  Continuous Deployment & Packaging: Runs under hypervisors (KVM and Xen) and on bare metal. Rumprun can be used with or without a POSIX'y interface.
  Orchestration, Management & Monitoring: Very limited monitoring through remote syslog.
  Security: Increased security by default in unikernels.
  Scalability & Elasticity Control: N.A.
  Add-ons: -
Rkt
  Development, Continuous Integration and Testing: A command line utility, acbuild, to build and modify container images, intended to provide an image build workflow independent of specific formats (currently it supports ACI and OCI).
  Continuous Deployment & Packaging: Apply different configurations (like isolation parameters) both at pod level and at the more granular per-application level. Support for two kinds of pod (the core execution unit of rkt) runtime environments: an immutable pod runtime environment and a new, experimental mutable pod runtime environment.
  Orchestration, Management & Monitoring: Cluster orchestration and management through the container orchestration engine Fleet (an open-source cluster scheduler designed to treat a group of machines as though they shared an init system), to be replaced by Kubernetes in January 2018.
  Security: rkt is developed with a principle of "secure-by-default" and includes a number of important security features like support for SELinux, TPM measurement, and running app containers in hardware-isolated VMs.
  Scalability & Elasticity Control: Not addressed.
  Add-ons: -



Table 5: Market Players Analysis - Perspectives

Docker
  Performance: High [115], [116] (with Czipri noting that, in certain experiments, Docker spent a lot less CPU time, being nearly equivalent with bare metal).
  Integration with 3rd party services: Extensible through open APIs, plugins and drivers.
  Community Adoption: High - 40% market share growth from March 2016 until March 2017 [Source: Datadog].
  Maturity: Medium.
  Pricing model: Docker Community Edition: free. Docker Enterprise Edition: from $750 per node per year.
  Comments: Significant learning curve. Differences in how it runs on different host machines. Complete and explanatory documentation.

IncludeOS
  Performance: High (extremely small disk and memory footprint; very fast boot time: <0.3 seconds according to benchmarks [117]).
  Integration with 3rd party services: N.A.
  Community Adoption: Low (41 contributors and 187 forks in the GitHub repository as of May 29th, 2017) [Source: GitHub].
  Maturity: Low - v0.8 released in June 2016.
  Pricing model: Open source under Apache 2.0 licence.
  Comments: Adequate documentation.

Istio
  Performance: Not officially assessed yet - a beta version is planned to track performance testing, benchmark/comparison and performance regression [118].
  Integration with 3rd party services: Extending the Envoy proxy from Lyft; Kubernetes; Calico (ongoing).
  Community Adoption: Medium - support of key industry players and strong community interest (22 contributors and 147 forks on the GitHub repository as of June 14th, 2017) [Source: GitHub].
  Maturity: Low - v0.1 released in May 2017.
  Pricing model: Open source under Apache 2.0 licence.
  Comments: Explanatory introduction and documentation.

linkerd
  Performance: Medium [119].
  Integration with 3rd party services: Docker Compose, DC/OS, Mesos, Kubernetes.
  Community Adoption: Low (43 contributors and 198 forks on the GitHub repository as of June 14th, 2017) [Source: GitHub].
  Maturity: Medium - v1.1.0 released in June 2017.
  Pricing model: Open source under Apache 2.0 licence.
  Comments: Complete and explanatory documentation.

MirageOS
  Performance: High [120], [121].
  Integration with 3rd party services: Modular OS libraries, which can be switched when needed.
  Community Adoption: Low (34 contributors and 122 forks on the mirage/mirage GitHub repository as of May 29th, 2017) [Source: GitHub].
  Maturity: Medium - v3.0 released in February 2017.
  Pricing model: Open source under the ISC License (with some exceptions released under LGPLv2).
  Comments: Adequate documentation.



OpenShift.io
  Performance: Not officially assessed yet.
  Integration with 3rd party services: fabric8, Jenkins, Eclipse Che, OpenJDK, PCP, WildFly Swarm, Eclipse Vert.x, Spring Boot, OpenShift, Kubernetes.
  Community Adoption: Low (12 contributors and 23 forks on the GitHub repository as of June 14th, 2017) [Source: GitHub].
  Maturity: Low - announced and launched in May 2017; developer preview available upon request.
  Pricing model: Open source (exact license not announced yet).
  Comments: Minimal documentation at the moment.

OSv
  Performance: High (a typical Capstan image is only 12-20MB larger than the application and adds ~3 seconds to the build time, according to the official website and third-party evaluations conducted).
  Integration with 3rd party services: Jolokia JMX-via-JSON-REST connector, NewRelic.
  Community Adoption: Low (87 contributors and 458 forks on GitHub as of May 29th, 2017) [Source: GitHub].
  Maturity: Low - currently in beta.
  Pricing model: Open source, distributed under the 3-clause BSD license.
  Comments: -

Rumprun
  Performance: High [122].
  Integration with 3rd party services: Work in progress; Travis CI integration for new releases.
  Community Adoption: Low (16 contributors and 75 forks on the rumpkernel/rumprun GitHub repository as of May 29th, 2017) [Source: GitHub].
  Maturity: Low - still in an experimental phase.
  Pricing model: Open source, distributed under a 2-clause BSD license.
  Comments: -



Rkt
  Performance: Medium (especially when it comes to container startup time in comparison to Docker [123]).
  Integration with 3rd party services: init systems (like systemd, upstart), Kubernetes (via rktnetes), Nomad, Mesos, Mulled, Quay.io, SELinux, cAdvisor. Support for swappable execution engines. Natively runs Docker images.
  Community Adoption: Medium (185 contributors and 699 forks on the rkt/rkt GitHub repository as of May 29th, 2017) [Source: GitHub].
  Maturity: Medium.
  Pricing model: Open source under Apache 2.0 license.
  Comments: -



In a largely uncharted and rapidly evolving cloud landscape consisting of DevOps, containers and unikernels, Unicorn is positioned as a novel DevOps-as-a-Service offering with a unique value proposition: simplifying the design, deployment and management of secure and elastic-by-design multi-cloud services. In contrast to the existing platforms (analysed in the previous paragraphs, and typically offering rather targeted solutions), Unicorn will address the different DevOps phases, ranging from Development, Continuous Integration & Testing, and Continuous Deployment & Packaging, to Orchestration, Management & Monitoring, in a solid and consistent manner. From the technology watch and market analysis initially conducted (and that will be ongoing throughout the project implementation), Istio and OpenShift.io are the platforms most directly related to Unicorn; taking into account that they were only very recently announced, they signify that Unicorn is attuned to the actual stakeholders' needs in the rapidly growing cloud DevOps market.

In particular, with respect to micro-services, Unicorn will facilitate the DevOps teams within ICT SMEs (which represent the core target audience of Unicorn) in adopting the micro-service architectural paradigm by providing a unified web IDE for the development, deployment and management of cloud applications. Going beyond the offerings of the existing platforms, Unicorn puts particular emphasis on security, scalability and elasticity control, enabled through policy and constraint definition as well as through continuous risk and vulnerability assessment, and complements its solution with advanced orchestration and monitoring capabilities. As far as the container and unikernel technologies for cloud application packaging and deployment are concerned, in order to facilitate adoption, Unicorn will pursue support for popular containerized execution environments (e.g., Docker, CoreOS) and the orchestration of containers/unikernels able to host complex and resource-intensive cloud applications in a minimal, yet persistent, manner for the DevOps team, based on the continuous efforts of the project to probe the EU ICT industry for the technologies truly dominating their interests and needs.



6 Requirement Analysis Scheme
This Chapter documents the key findings of the analysis performed on the results of the disseminated online
survey and the personal interviews.

6.1 Interviewee Profile


Altogether, 20 organisations operating in multiple and different fields participated in the interview process; they are listed in Table 6. These organisations are primarily based in the European Union, with the larger organisations (e.g., SAP, HP) also spanning their business operations across the globe. Figure 13 depicts the number of employees working in the IT department of each organisation. From this figure, we observe that most of the organisations interviewed identify themselves as Startups/SMEs and have fewer than 25 employees (65%) in their IT department, while 15% have between 26 and 50 employees. In turn, 15% of the interviewed organisations identify themselves as large organisations and feature more than 101 employees in their IT department. In order not to limit the target audience of Unicorn, the organisations interviewed were carefully selected so as to operate in multiple and different business domains and geographic regions, as shown in Table 6 and Figure 12.

Table 6: Organisations that Participated in the Interview Process

Organisation Organisation Type Interviewee Role Country



CAS A.G. Pilot Management Germany
Cocoon Not Related to Unicorn CTO Cyprus
CRUK Institute Not Related to Unicorn Chief Architect United Kingdom
CYTA Not Related to Unicorn System/Net Admin Cyprus
FxPro Not Related to Unicorn CTO United Kingdom (operates
globally)
EduportalGR Not Related to Unicorn Chief Architect Greece
Hopu CINCUBATOR CTO Spain
HP-Cloud Not Related to Unicorn Programmer US (operates globally)
Ideas2Life Not Related to Unicorn CTO Cyprus
LockUp CINCUBATOR CTO Spain
Nubedian A.G. CyberForum DevOps Engineer Germany
PointRF Not Related to Unicorn Chief Architect Israel (operates globally)
Proasistech CINCUBATOR Management Spain
Redikod Pilot Programmer Sweden/Scandinavia
SAP Innovation Not Related to Unicorn Programmer Germany (operates globally)
Suite5 Pilot CTO United Kingdom
Swiftflats CINCUBATOR Programmer Spain
Tursoft health Not Related to Unicorn Chief Architect Turkey/Greece
Ubitech Pilot Programmer Greece
Yellowmap A.G. CyberForum DevOps Engineer Germany/Austria/Switzerland





Figure 12: Organisation Operating Business Domains as Identified by Interviewees

6.2 Unicorn Survey and Interview Study Key Findings


The following subsections document the key findings of the Unicorn survey and interview study.


Figure 13: Number of Employees in IT department

6.2.1 Unclear Distinction Between Software Programmer and DevOps Engineer in Startups
The interview process revealed that the line between the role of a Software Programmer and that of a DevOps Engineer is blurred, especially in organisations identifying themselves as Startups with fewer than 25 employees. In particular, programmers are usually tightly involved in the software delivery cycle, taking up management tasks such as designing security enforcement and monitoring policies, as well as (virtual) infrastructure provisioning and configuration. When asked, programmers identified security enforcement and elastic resource scaling as the main challenges they face, due to the lack of experience and time to learn the related technologies and methodologies. These findings confirm the developer productivity reports from DZone (2017) and RebelLabs (2016).




Figure 14: Interviewee Role in Organisation

6.2.2 Programming Frameworks are Increasing Annotation-Based Programming Paradigm Adoption


The majority (80%) of the interview respondents mention that they have adopted annotation-based programming of some sort. When asked during the interview process, interviewees note that, beyond generating source code documentation, code annotations are widely used for source code project configuration, data and API modelling, logging, monitoring and testing. In particular, annotations are mostly used by the programmers of organisations that have adopted popular programming frameworks, such as Spring for Java (55%), Node.js for Javascript (25%) and Django for Python (25%). The popularity of the Spring framework confirms the RebelLabs (2017) development report, which emphasises micro-service framework adoption for Java.
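As an illustration of the paradigm (not drawn from the survey itself), annotation-style metadata in the Python ecosystem is typically expressed with decorators. The sketch below uses a hypothetical monitor decorator and MONITORED registry to show how an annotation can attach monitoring configuration to a function without touching its logic, in the spirit of the configuration, logging and monitoring uses reported above.

```python
import functools

# Hypothetical registry mapping function names to monitoring metadata,
# illustrating how an "annotation" carries configuration out-of-band.
MONITORED = {}

def monitor(metric, interval_s=30):
    """Decorator ("annotation") declaring that a function should be monitored."""
    def decorate(fn):
        MONITORED[fn.__name__] = {"metric": metric, "interval_s": interval_s}
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)  # business logic is left unchanged
        return wrapper
    return decorate

@monitor(metric="latency", interval_s=10)
def handle_request(payload):
    return {"status": "ok", "echo": payload}
```

A monitoring agent could then read the MONITORED registry at start-up and probe each declared function at its configured interval, which is exactly the separation of concerns that Java annotations provide in frameworks such as Spring.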


Figure 15: Usage of Annotation-based Programming Paradigm by Interviewees





Figure 16: Popular Programming Frameworks Used by Interviewees

6.2.3 Collaboration Tools are now Industry Standard Practices while Continuous Integration and Delivery
Tool Adoption is Facing Serious Challenges
Almost all interview respondents (95%) mention that the employees of their organisation use at least one collaboration tool. In particular, all positive respondents mention that a collaboration tool for source code version control is always used (mainly git), while more than 70% of software development teams also use at least one collaborative tool for communication (e.g., Slack, Skype) and task management (e.g., Pivotal Tracker, Trello, Team).

Figure 17: Usage of Collaboration Tools Among Employees of Organisation

Based on the results of our survey, 60% of the respondents state that they are currently using continuous integration tools in their application development cycle. This number is slightly lower than the percentages in studies such as GitLab's developer report (2016). Moreover, Jenkins (55%) was noted as the most popular CI tool of choice, although almost one out of three respondents is currently not using any CI/CD tool. Interestingly, when personally questioned, these respondents usually state that lack of time (50%) and lack of skills (45%) are preventing them from fully adopting a CI/CD pipeline. On the other hand, respondents with experience in utilizing CI/CD tools mention that the most challenging aspects of fully embracing a CI/CD software delivery pipeline are the lack of a unified tool (55%) and the extreme difficulty of environment setup and, in particular, of integrating automated technologies into the cycle (40%), such as resource scaling, runtime security enforcement and testing.


Figure 18: Popularity of CI/CD Frameworks Embraced by Surveyed Organisations


Figure 19: Challenges Preventing Full Adoption of CI/CD Pipeline

6.2.4 Cloud IDEs are Becoming Popular but for Large(r) Development Teams
Our survey highlights that the transition from traditional desktop IDEs to Cloud IDEs has already started. In particular, 45% of our survey respondents state that they are currently using a Cloud IDE for cloud application development. We note that this number is rather high when compared to the StackOverflow (2016, 2017) developer reports, which place general adoption at around 15%; however, our survey targets cloud application development, where Cloud IDEs prevail. The results of our survey also reveal that the most popular Cloud IDEs are Eclipse Che (40%), SAP HANA (20%) and Cloud9 (15%). Moreover, the discussions with the interviewed IT professionals reveal that organisations comprised of larger development teams (>11 IT employees) are keener on adopting Cloud IDEs, as these combine development with CI/CD tool integration for automation, collaboration, software delivery and communication, which are absolute necessities for such teams.


Figure 20: Cloud IDE Embracement by Interviewed Organisations

On the other hand, the majority of those not adopting a Cloud IDE for development state that they are happy using their desktop IDE (82%) and do not foresee transitioning to a Cloud IDE in the immediate future. Another notable percentage (30%) also reports that performance-related issues prevent Cloud IDE adoption. The first claim was a particular discussion point with interviewees from organisations identified as Startups and comprised of small development teams. To better understand this, we asked about their software development process, where it was revealed that a single developer in such teams is usually in charge of the coding of an entire project, or developers are in charge of specific tasks (e.g., front-end, back-end) and the integration of tasks happens at the end of a development cycle, thus limiting, at the moment, the need for a Cloud IDE.



Figure 21: Popular reasons preventing Cloud IDE adoption from responders not using Cloud IDEs



6.2.5 Micro-service Architectural Approach is Becoming a Cloud Trend Especially in the IoT and SaaS
domains
Micro-services are currently used in production by 40% of our respondents, while another 30% are currently experimenting with them for eventual production deployment. These numbers confirm DZone's (2017) and Lightbend's (2016) DevOps reports. Interestingly, the organisations adopting micro-services in production originate from the IoT and SaaS domains, while the organisations experimenting originate from the business analytics and (location) recommendation services sector. Moreover, among these organisations, the micro-service architectural pattern is used for data serving (100%), business logic (83%) and the front-end (66%). On the other hand, only 10% of the interviewees mentioned that micro-services are not of interest, with these responses coming from the telecom and educational business domains.


Figure 22: Micro-service Architecture Adoption by Interviewed Organisations

6.2.6 Containerized Solutions are Following Micro-service Adoption Trends


With the increased interest in micro-service architectural patterns, interviewed organisations also seem to be utilizing containerized solutions for application deployment, with 20% of the respondents stating that they are currently running containerized applications in production, while another 35% are seriously planning and experimenting with the aim of ultimately using this technology in production. Similarly to micro-services, these numbers confirm DZone's (2017) and Lightbend's (2016) DevOps reports. Also, when questioned, only 36% of the respondents state that their entire application deployment is containerized. The rest (64%) reveal that containers are utilized only for the dynamic, scalable and stateless services comprising their application deployment, thus adopting a mixture of (virtualized) solutions for their cloud execution environments.




Figure 23: Containerized Solution Adoption by Interviewed Organisations

Interestingly, it is acknowledged that the container domain introduces a number of challenges for developers. In particular, interviewees with experience in deploying containerized applications mention that the top challenges in the container domain include: performance and application monitoring (55%), service orchestration (50%), database access (45%), lack of experience (45%) and auto-scaling (40%). These challenges confirm studies from RightScale (2017) and DZone (2017), and are highly relevant to the Unicorn project. What is more, challenges related to reducing container security threats, such as stripping containers of attack interfaces (35%), secure resource acquisition (30%), fast boot times (25%) and reduced image sizes (20%), are also relevant to the advancement of unikernels and consequently to the Unicorn project. Finally, it must be noted that almost all organisations (92%) have adopted, at some point, Docker as the container technology for their applications, with other preferred solutions such as Kubernetes (33%) and Swarm (25%) being tightly coupled to Docker for cluster management when containers are deployed in production. Therefore, Docker is a technology that must be targeted by Unicorn for containerized cloud execution environments, as its stakeholders, whether large or small in size, identify Docker as their technology of choice.


Figure 24: Containerized Solution Adoption Challenges as Identified by Interviewed Organisations




Figure 25: Containerized Solutions that have been adopted by those using or considering containerization

6.2.7 Multi-Cloud Deployment Model Adoption and Challenges


Our survey is in line with Gartner's Magic Quadrant (2016) reports, which reveal that the top cloud provider is Amazon Web Services (AWS), followed by Microsoft Azure, with OpenStack being the most prominent cloud solution for private cloud infrastructural deployments. More interestingly, 25% of our survey respondents are currently following a multi-cloud deployment approach, while another 25% are experimenting and planning to do so. These numbers are significantly lower than reports from RightScale (2017), which put the percentage of organisations adopting hybrid cloud at over 70%. However, one must not forget that in the Startup eco-system companies start small, adopting one cloud provider, and then experiment as they scale; indeed, 20% of our respondents state they are experimenting with multi-cloud deployments. On the other hand, those who are not planning to adopt a multi-cloud approach state that this is due to significant security concerns about moving data across cloud regions, or that they are happy with using just one cloud provider.


Figure 26: Multi-Cloud Deployment Model Adoption by Interviewee Organisations

Furthermore, by personally talking with interviewees to obtain user stories, we identified that different multi-cloud challenges arise based on the particular deployment strategy followed by each organisation. Thus, instead of simply compiling a list of challenges, we further investigated when and where each challenge is applicable. In particular, MC2 (one cloud provider, multiple availability zones) is a popular multi-cloud deployment model². For organisations adopting a deployment model resembling MC2, security concerns about moving data across cloud sites/regions and trust/compliance issues are of extreme concern. Organisations adopting the MC2 deployment model originate mainly from Germany and the UK, and operate in the e-health or social assistance business domains, where organisations are obliged to comply with strict national data movement laws preventing sensitive client data from being hosted outside national borders; for this reason, inter-connected private cloud deployments are preferred.


Figure 27: Popular Cloud Providers

On the other hand, challenges related to portability, vendor lock-in and the lack of unified management tools are of extreme concern for organisations adopting the popular MC3 and MC4 multi-cloud deployment models. In particular, these models mainly use multiple cloud providers to run services, targeting load balancing and latency reduction when serving content to clients; thus, these models are highly relevant to location/recommendation-based services, SaaS cloud solutions and IoT applications.


² Multi-cloud deployment models are described in detail in Section 3.2.




Figure 28: Multi-Cloud Adoption Challenges

6.2.8 Cloud Monitoring Adoption and Challenges


Monitoring is employed by all interviewed organisations, with monitoring targeting various levels of the application lifecycle and execution environment. In particular, respondents usually stated that service availability (80%), API access (60%) and the underlying infrastructure (55%) are monitored by deploying either in-house or general-purpose monitoring tools. Interestingly, as the monitoring level becomes more specialized and moves closer to the client side (e.g., application behaviour, client interaction, transactions, etc.), organisations start to face challenges, as monitoring tools must be extended, customized and tailored to the organisation's monitoring needs.
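To make the most common monitoring target concrete, service availability is typically tracked with a lightweight probe that periodically checks an endpoint and reports the fraction of successful checks over a window. The sketch below is illustrative only (the check_service helper and its thresholds are not taken from the survey or any interviewed organisation) and uses only the Python standard library.

```python
from urllib.request import urlopen
from urllib.error import URLError

# Illustrative availability probe: endpoint URLs and timeouts are
# hypothetical example values, not survey data.
def check_service(url, timeout_s=2.0):
    """Return True if the service answers with an HTTP 2xx/3xx status."""
    try:
        with urlopen(url, timeout=timeout_s) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

def availability(results):
    """Fraction of successful checks, e.g. over the last monitoring window."""
    return sum(results) / len(results) if results else 0.0
```

For example, availability([True, True, False, True]) reports 0.75 for a window in which one of four probes failed; in-house tools of the kind mentioned by respondents are often little more than this idea plus alerting.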


Figure 29: Monitoring Level Targets as Responded by Interviewed Organisations

In general, multiple and different monitoring solutions are used. Interestingly, all respondents stated that they must resort to using more than one monitoring tool for their needs, with 70% dissatisfied by this fact. In particular, 65-70% of the respondents mention that they mostly use in-house developed monitoring tools and/or general-purpose open-source tools. On the other hand, 40% claim to be using tools offered by the cloud provider, while 35% of the respondents mention that third-party monitoring-as-a-service tools (e.g., NewRelic, Datadog) are used for their monitoring needs.


Figure 30: Monitoring Tool Type Adoption by Interviewed Organisations

With regard to challenges, respondents state that the most prominent need arises from the lack of parameter tuning support in monitoring tools to optimise performance, quality and cost (70%). In turn, as multiple monitoring tools must be used by organisations, integrating them in the execution environment, or finding a monitoring tool that can be used at different and multiple levels, is another prominent challenge stated by the interviewees (70%). Interestingly, 50% of the interviewees stated that accessing and processing historic monitoring data is another important challenge. Monitoring tool portability across cloud platforms (40%), as well as multi-cloud monitoring support (40%), are also relevant to the project. On the other hand, accessing real-time monitoring data (25%) and plotting data (5%) seem to be covered by the offered tools and are not considered current challenges in the monitoring domain.


Figure 31: Monitoring Challenges Faced by the Interviewed Organisations



6.2.9 Elastic Scaling Adoption and Challenges
The results of our survey show that most of our respondents (65%) do not currently use elastic scaling, which contradicts popular cloud surveys and reports from RightScale (2017) and Gartner (2016). However, the majority of the respondents of our survey are SMEs/Startups whose services were recently introduced to the public. Thus, although they are not currently using elastic scaling, almost all of them (95%) highlight that elasticity is needed but certain challenges must be overcome first, the most prominent being a lack of experience of how elasticity works, followed by how to configure the auto-scaling process and how to constrain auto-scaling within a budget.


Figure 32: Elastic Scaling Adoption

In turn, those who currently use elasticity for application scaling originate from the IoT, SaaS cloud solutions and recommendation/location service offering business domains. Horizontal scaling is the preferred way to scale resources for most of the respondents (71%), and is adopted mainly for load balancing. These organisations mostly adopt the tools provided by their cloud provider (71%), with the second preferred option being in-house developed tools (57%). This is the opposite picture from monitoring, where in-house and general-purpose monitoring tools are preferred over the tools offered by the cloud provider. The justification for this is that developing an auto-scaling tool is extremely challenging, and organisations therefore resort to using what is offered by the cloud provider, even if this restricts deployment to a single provider.


Figure 33: Elastic Scaling Type



Interestingly, the most prominent challenge in elastic scaling for organisations is parameter tuning to optimize the performance, cost and quality of their services (65%), which is related to the second most challenging task, the lack of experience. Respondents that currently use the tools provided by their cloud provider, and even the ones that haven't yet adopted elastic scaling, state that configuring the elasticity service for their application needs is a non-trivial task due to the insufficient knowledge they possess; therefore, the need for a simple but accurate elasticity control comes to the foreground.


Figure 34: Elasticity Tools Used by Organisations that Have Adopted Elastic Scaling as Part of their ALM

Another major challenge preventing companies from adopting elastic scaling is budget constraints (50%). When using elastic services offered by cloud providers, especially when they are not configured properly, the amount spent can be significantly larger than the amount earned. Other challenges, mentioned by one third of the respondents, are elastic scaling across multiple cloud regions and providers and the lack of a unified auto-scaling environment. These challenges highlight the need for a unified auto-scaling tool, able to orchestrate instances across multiple cloud sites, providers and regions.


Figure 35: Elastic Scaling Adoption Challenges



6.2.10 When is Security Considered in the Lifecycle of an Application
From the interview process, respondents' answers to the question of when security is considered in the application lifecycle reveal that there is no norm as to when security is taken into consideration. In particular, 35% of the respondents state that security is considered at the requirements phase, 30% at the programming phase, 25% at the design phase, while 10% mention that security is only considered after deploying the application and detecting where security is needed; at that point, any security issues are dealt with and a re-deployment is issued. These numbers confirm the study conducted by Veracode (2016), showing that there is no norm for when to integrate security. This is highly relevant to the project, as it cannot simply be assumed that security will always be considered at the requirements or design phase; therefore, integrating or customizing security, even at development or runtime, when permitted, must be taken into consideration.


Figure 36: Stage of Application Lifecycle at which Security is Considered by Interviewed Organisations

6.2.11 Cloud Security Enforcement and Privacy Preservation Challenges


Respondents of our interview process state that the major challenges they face include: vulnerability detection (16/20), data movement compliance (15/20), information flow tracking (14/20) and privacy protection (13/20). These results are in line with the findings of Veracode (2016), showing that sensitive data exposure and runtime software vulnerabilities are the prime concern of most SMEs and Startups and, therefore, remain open challenges. These challenges are highly relevant to the requirements of the project, pointing out the need for a mechanism for data privacy enforcement and continuous vulnerability assessment. On the other hand, challenges such as web firewalling (15/20), SQL injection prevention (13/20), static code analysis (10/20), cross-site forgery/scripting (9/20) and authorization permission management (9/20) seem to be addressable by most of the interviewed stakeholders and are less relevant to the project.




Figure 37: Security Mechanisms Adopted by Interviewed Organisations (#1)


Figure 38: Security Mechanisms Adopted by Interviewed Organisations (#2)




Figure 39: Security Mechanisms Adopted by Interviewed Organisations (#3)



7 Unicorn System Requirements
In this chapter we elaborate on the functional and non-functional system requirements for the Unicorn platform and eco-system, derived from the results of the requirement collection methodology described in Chapters 4 and 5.

7.1 Functional Requirements


Functional requirements represent the list of system properties that need to be implemented and ultimately supported within the context of the Unicorn ecosystem and platform. This includes all behavioural aspects of the system components, after taking into consideration the identified roles of the Unicorn ecosystem, as documented in Section 5.2. These requirements are logically grouped per role. We have followed a consistent and structured way of representing the requirements, which will allow us to further define the detailed reference architecture for the Unicorn platform in the forthcoming deliverable D1.2. In Section 10.1 of the Annex we provide a table listing all the identified Unicorn functional requirements, while the following listings elaborate on the description of each requirement. Table 7 provides an overview of the mapping of functional requirements to user roles. Finally, we note that to derive the functional requirements referring to the security enforcement capabilities offered to Unicorn users, a threat analysis model (asset, threat, vulnerability, and countermeasure) is required. In order to reduce repetition, the threat analysis for the particular security and privacy enforcement mechanisms offered by Unicorn will be introduced in the respective deliverable, D4.1.

ID FR.1

Title Develop cloud application based on code annotation design libraries and define runtime
policies and constraints

User Roles Cloud Application Developer

Description The Unicorn platform must provide cloud application developers with design libraries to
annotate the source code of their cloud application under development, for monitoring,
resource management, security and data privacy policy and constraint enforcement point
definition. Annotated policies depending on the scope supported by the Unicorn platform can
be defined at various application granularity levels (e.g., entire application, particular service,
code segment). Unicorn users must be able to use the annotated entities without any further
modification in the business logic of the under development application. This practically
means that policy and constraint enforcement is totally transparent to the developer and will
take place in the cloud execution container. Hence, metadata annotations (e.g., monitoring)
relate to respective Unicorn policy-enforcement enablers (e.g., a handler collecting the
annotated monitoring data) that will generate/transform source code at design time and/or
be synchronized at runtime with the Core Context Model (FR.13) upon instantiation of the
cloud execution environment.
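To make the annotation concept concrete, the following minimal Python sketch mimics how a design library could record a monitoring policy without altering the business logic; the decorator name `monitor` and the `registered_policies` store are invented for illustration and are not the actual Unicorn design-library API.

```python
# Hypothetical sketch: decorator-style annotations that record policies
# separately from the business logic they decorate.
registered_policies = []

def monitor(metric, period_sec=10):
    """Record a monitoring policy for a function without altering its logic."""
    def wrap(fn):
        # The function is returned unchanged; the policy is stored
        # separately so a runtime enabler could later enforce it.
        registered_policies.append({"target": fn.__name__,
                                    "metric": metric,
                                    "period_sec": period_sec})
        return fn
    return wrap

@monitor(metric="response_time", period_sec=5)
def checkout(order):
    # Business logic: unaffected by the annotation above.
    return {"status": "ok", "order": order}

print(checkout(42))                      # behaves exactly as before
print(registered_policies[0]["metric"])  # response_time
```

The key property illustrated is the transparency requirement: the annotated function behaves exactly as if the annotation were absent, while the enforcement point is recorded for the runtime enablers.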

ID FR.2

Title Securely register and manage cloud provider credentials



User Roles Cloud Application Product Manager, Cloud Application Admin, Unicorn Developer

Description The Unicorn platform must provide the means to support cloud provider credential
management by offering secure management and storage of access credentials (e.g.,
user/password pairings, API access tokens) for Unicorn users. This practically means that users
are not required to provide their credentials each time an application deployment is initiated
or when a request/query for managing the application lifecycle is conducted (including re-
deployment of an updated version of an application).

ID FR.3

Title Search interface for extracting underlying programmable cloud offerings and capability
metadata descriptions

User Roles Cloud Application Product Manager

Description Unicorn must expose, through its unified dashboard, a search interface providing its users with
the ability to browse cloud offerings and cloud provider service capabilities, obtain
intuitive metadata descriptions and filter the results to limit the returned result set(s). The
search interface will be provided as a graphical alternative for users, instead of directly using
the Unicorn Unified API (FR.15).

ID FR.4

Title Creation of Unicorn-compliant cloud application deployment assembly

User Roles Cloud Application Product Manager

Description The Unicorn platform must provide its users with a standardized, transparent and
infrastructure-agnostic process to create and feed the Unicorn platform with a deployment
assembly for the application to be deployed. Unicorn adopts the notion of a directed service
graph, where nodes represent the (micro-) services composing the cloud application and
edges represent the relationship(s) and inter-dependencies between services. Nodes are
described by a number of attributes denoting resource management parameters (e.g.,
requested memory, disk size, network interfaces), monitoring metrics to collect, cost
constraints and elastic scaling policies. In turn, relationships and inter-dependencies denote
the deployment order and restrictions limiting the security and data movement between
services. As a number of the attributes and parameters describing nodes and edges are also
available as code annotation policies (e.g., monitoring) at the application development phase
(FR.1), these will be automatically translated and added to the service graph description by
the respective Unicorn enablers interpreting code annotations, based on the Unicorn core context
model, without any additional user effort (FR.13, FR.14). However, the final deployment
assembly, bundling code artifacts, the standardized deployment description and deployment
requests, will be created automatically (with no additional effort) only when the user packaging the
application determines that the developed and described application is ready for deployment
by the Unicorn platform.
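The directed service graph notion can be illustrated with a small sketch; the attribute names (`memory_mb`, `scaling_policy`, the `data_stays_in_EU` constraint) are hypothetical examples, not the actual Unicorn deployment schema.

```python
# Illustrative deployment description as a directed service graph:
# nodes carry resource/monitoring/scaling attributes, edges carry
# deployment order and data-movement restrictions.
service_graph = {
    "nodes": {
        "web": {"memory_mb": 512, "metrics": ["response_time"],
                "scaling_policy": "cpu > 80% -> +1 instance"},
        "db":  {"memory_mb": 2048, "disk_gb": 50,
                "metrics": ["query_latency"]},
    },
    # "web" depends on "db" and their data must stay inside the EU.
    "edges": [
        {"from": "web", "to": "db", "constraint": "data_stays_in_EU"},
    ],
}

def deployment_order(graph):
    """Order services so every dependency precedes its dependants.
    Assumes the graph is acyclic (cycles are rejected at validation, FR.6)."""
    deps = {name: set() for name in graph["nodes"]}
    for edge in graph["edges"]:
        deps[edge["from"]].add(edge["to"])    # "from" depends on "to"
    order, placed = [], set()
    while len(order) < len(deps):
        for svc, required in deps.items():
            if svc not in placed and required <= placed:
                order.append(svc)
                placed.add(svc)
    return order

print(deployment_order(service_graph))   # ['db', 'web']
```

Here the edge relationship is what yields the deployment order: the database must be operational before the web tier that depends on it.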



ID FR.5

Title Cloud application deployment bootstrapping to a (multi-) cloud execution environment

User Roles Cloud Application Admin, Cloud Provider, Unicorn Developer

Description The Unicorn platform must provide its users with the means to deploy their compliant
applications from the Unicorn graphical interface after users have developed their application
using the provided design libraries (FR.1) and have created a deployment assembly (FR.4).
Users should also be notified of the status of the deployment (success, failed) and in the case
of a failed deployment, the response should include a descriptive reasoning as to what
problem occurred. The application deployment is the most critical process and includes a
number of steps, defined below, that must be performed in order for the Unicorn-compliant
application to be operational:
Parse deployment assembly (FR.4)
Verify validity of defined runtime policy and constraints and assure all annotations can
be interpreted and handled by the respected Unicorn enablers (e.g., monitoring,
security enforcement) (FR.6)
Derive (near-) optimal application placement plan (FR.11)
Based on placement plan, instantiate resources and services to establish an operation
(multi-cloud) execution environment (FR.16)
Instantiate required Unicorn runtime enablers to enforce runtime policies and
constraints and verify operation status (FR.14)

As this process is critical and only if all steps are successful, a deployment may be established,
the entire bootstrapping process must be transactional.
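As an illustration of the transactional requirement, the sketch below runs bootstrap steps in order and rolls back completed steps in reverse when one fails; the step implementations are invented stubs, not actual Unicorn components.

```python
# Sketch of a transactional bootstrap: every step must succeed,
# otherwise previously completed steps are undone in reverse order
# and a descriptive failure reason is returned.
def bootstrap(assembly, steps):
    done = []
    try:
        for name, step, undo in steps:
            step(assembly)
            done.append(undo)
        return "deployed"
    except Exception as err:
        for undo in reversed(done):       # transactional rollback
            undo(assembly)
        return f"failed: {err}"           # descriptive failure reason

log = []

def reserve_resources(assembly):
    raise RuntimeError("quota exceeded")  # simulate a failing step

steps = [
    ("parse",    lambda a: log.append("parsed"),    lambda a: log.append("parse undone")),
    ("validate", lambda a: log.append("validated"), lambda a: log.append("validation undone")),
    ("reserve",  reserve_resources,                 lambda a: None),
]
print(bootstrap({}, steps))   # failed: quota exceeded
print(log)                    # earlier steps rolled back in reverse
```

The rollback in reverse order mirrors the requirement that a partially bootstrapped deployment must never be left in an inconsistent, half-established state.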

ID FR.6

Title Deployment assembly integrity validation

User Roles Cloud Application Tester, Unicorn Developer

Description Before the reservation of underlying programmable infrastructure, the Unicorn platform
should verify and validate the deployment assembly. This will be performed by Unicorn to
detect potential problems such as: unreachable edges in the service graph description, due to
antagonizing policies/constraints, which could result in inaccessible nodes or optimization
criteria; and circular dependencies, which lead to a situation in which no valid evaluation order
exists, because none of the policies in the cycle can be evaluated in order (FR.4). This process,
while not exhaustive, is an important aspect for Unicorn users and Unicorn component
developers (FR.18), performed at the pre-deployment phase to detect whether a problem
prevents a successful deployment, in order to reduce the resource allocation costs of
unsuccessful large and complex deployments.
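The circular-dependency check described here is, in essence, cycle detection over the service graph. A minimal depth-first-search sketch follows; the graph shape (a `service -> dependencies` mapping) is an assumption for illustration, not the actual assembly format.

```python
# Sketch of pre-deployment cycle detection: a back-edge to a node
# still being visited (GREY) means no valid evaluation order exists.
def has_cycle(deps):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}
    def visit(node):
        color[node] = GREY
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GREY:     # back-edge: a cycle
                return True
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False
    return any(color[node] == WHITE and visit(node) for node in deps)

valid   = {"web": ["db"], "db": []}
invalid = {"a": ["b"], "b": ["c"], "c": ["a"]}   # circular policy chain
print(has_cycle(valid))    # False
print(has_cycle(invalid))  # True
```

Running such a check before any resource reservation is what keeps the cost of a doomed large deployment down to a graph traversal.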



ID FR.7

Title Access application behavior and performance monitoring data

User Roles Cloud Application Admin

Description The Unicorn platform must provide its users with access to real-time and historical monitoring
data via the Unicorn graphical user interface. The monitoring data per se (e.g., response time,
service availability), the granularity level (e.g., entire application, service part) and the
intrusiveness (e.g., periodicity) at which monitoring data is collected and logged throughout
the entire lifespan of an application should be determined by the user via the provided
deployment assembly, compiled based on the user's preferences and his/her annotated code
(FR.1). Monitoring annotations must allow users to define and handle counters, timers, traffic
interceptors and custom metric types to gather resource utilization, application feature
behaviour and performance from single application (micro-) instances, as well as aggregated
overviews of metrics across application service tiers and availability regions, in order to
successfully assess the performance, scalability and security of their application seamlessly
across multiple cloud offerings through one unified interface offered by Unicorn.
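Purely as an illustration of the counter and timer metric types mentioned above (the class names are invented, not the Unicorn monitoring API):

```python
# Minimal sketch of two metric types a monitoring library could expose.
class Counter:
    """Monotonic counter, e.g. for counting served requests."""
    def __init__(self):
        self.value = 0
    def inc(self, n=1):
        self.value += n

class Timer:
    """Collects duration samples, e.g. per-request response times."""
    def __init__(self):
        self.samples = []
    def record(self, seconds):
        self.samples.append(seconds)
    def average(self):
        return sum(self.samples) / len(self.samples)

requests = Counter()
latency = Timer()
for sample in (0.10, 0.30, 0.20):   # pretend three requests were served
    requests.inc()
    latency.record(sample)

print(requests.value)                 # 3
print(round(latency.average(), 2))    # 0.2
```

An aggregated overview across service tiers would then amount to merging such per-instance counters and timer samples before computing the summary statistics.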

ID FR.8

Title Real-Time notification and alerting of security incidents and QoS guarantees

User Roles Cloud Application Admin

Description The Unicorn platform must be able to notify and alert its users, through the Unicorn graphical
user interface, of events classified either by: (i) the platform's security enforcement
enablers, such as suspicious incidents (e.g., a detected vulnerability); or (ii) the monitoring
enabler's analytics process, such as events based on certain user-defined criteria (e.g., a metric
threshold violation). In turn, the Unicorn platform must detect QoS policy violations on
provisioned services in operational cloud environments and also notify users about these
violations, in order for them to take these into consideration and, possibly, act upon them.
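A threshold-violation check of the kind described can be sketched as follows; the rule representation (a metric name mapped to an upper limit) is a simplification assumed purely for illustration.

```python
# Sketch of user-defined threshold alerting over a metric snapshot.
def check_alerts(metrics, rules):
    """Return an alert message for every violated user-defined rule."""
    alerts = []
    for metric, limit in rules.items():
        value = metrics.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{metric}={value} exceeds threshold {limit}")
    return alerts

# Hypothetical user-defined thresholds and a current metric snapshot.
rules = {"response_time_ms": 200, "error_rate": 0.01}
metrics = {"response_time_ms": 350, "error_rate": 0.002}
print(check_alerts(metrics, rules))
# ['response_time_ms=350 exceeds threshold 200']
```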

ID FR.9

Title Autonomic management of deployed cloud applications and real-time adaptation based on
intelligent decision-making mechanisms

User Roles Cloud Application Admin, Cloud Provider

Description Upon the initial placement of an application over a programmable infrastructure, possibly
spanning multiple cloud provider offerings, the Unicorn platform must provide the
means to manage the operational environment in an autonomic manner. This includes real-
time adaptation, where the execution environment of an application may be reconfigured based
on conditions and high-level policy constraints given by the user via the deployment assembly
and extracted by the enabler interpreting elasticity code annotations. Therefore, adaptation
can be triggered towards the fulfilment of the user's optimization objectives and may regard



scaling aspects (e.g., vertical/horizontal scaling), adaptation of the quality of provided services,
and/or monitoring intrusiveness (e.g., adapting periodicity). In order to support such intelligent
functionality, a set of distributed intelligent mechanisms must be designed and developed,
based on the various optimization strategies targeted by the interested users, in order to
optimize resource allocation across multi-cloud deployments for performance, cost, and data
locality.
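A policy-driven adaptation decision of this kind can be sketched minimally as below; the thresholds and policy fields are assumptions for illustration, not the actual Unicorn decision-making mechanism.

```python
# Sketch of a horizontal-scaling decision driven by a user policy.
def scaling_decision(cpu_percent, instances, policy):
    """Return the new instance count under a simple horizontal policy."""
    if cpu_percent > policy["scale_out_above"]:
        return instances + 1                  # scale out by one instance
    if cpu_percent < policy["scale_in_below"] and instances > policy["min"]:
        return instances - 1                  # scale back in to save cost
    return instances                          # load within bounds: no change

policy = {"scale_out_above": 80, "scale_in_below": 20, "min": 1}
print(scaling_decision(92, 3, policy))   # 4  (overloaded -> scale out)
print(scaling_decision(10, 3, policy))   # 2  (underused -> scale in)
print(scaling_decision(50, 3, policy))   # 3  (no change)
```

A real mechanism would of course weigh cost budgets, data locality and multi-cloud placement jointly, rather than a single CPU signal.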

ID FR.10

Title Manage the runtime lifecycle of a deployed cloud application

User Roles Cloud Application Admin, Unicorn Developer

Description The Unicorn platform must provide its users with the ability to manage both the state and the
runtime aspects of an application, as driven by the Unicorn context model, through the
Unicorn graphical user interface. State refers to the responsibility of the Unicorn platform to
handle requests for deployment, undeployment, start, pause, stop and migration of an
application to a cloud offering, and to make sure that applications are always in a consistent
state. To achieve this, the Unicorn platform must maintain an application lifecycle state
transition graph, which describes the valid state transitions from one state to another and
must incorporate asynchronous application state transitions for actions that require large
time frames for completion (e.g., deployment, migration). On the other hand, runtime aspects
refer to the Unicorn context model, where, after the application instantiation and during the
smooth execution of an application, changes may be requested such as reconsidering a policy
constraint (e.g., restricting data movement from one geographic region). In the case where
such changes can be satisfied by the current deployment (thus redeployment is not required),
then they must be reflected directly to the configuration of the Unicorn enablers handling the
runtime context of the aforementioned application.
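The lifecycle state transition graph can be sketched as a table of valid transitions; the state and action names below are illustrative, not the normative Unicorn lifecycle.

```python
# Sketch of an application lifecycle state machine:
# (current state, action) -> next state; anything else is invalid.
TRANSITIONS = {
    ("undeployed", "deploy"):   "running",
    ("running",    "pause"):    "paused",
    ("paused",     "start"):    "running",
    ("running",    "stop"):     "stopped",
    ("stopped",    "undeploy"): "undeployed",
}

def apply_action(state, action):
    """Permit only valid transitions, keeping the application consistent."""
    nxt = TRANSITIONS.get((state, action))
    if nxt is None:
        raise ValueError(f"invalid transition: '{action}' from '{state}'")
    return nxt

state = apply_action("undeployed", "deploy")   # -> running
state = apply_action(state, "pause")           # -> paused
print(state)                                    # paused
```

Long-running actions such as deployment or migration would additionally need intermediate "in progress" states to model the asynchronous transitions mentioned above.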

ID FR.11

Title Application placement over programmable cloud execution environments

User Roles Cloud Application Developer, Cloud Application Product Manager, Cloud Application Admin,
Unicorn Developer

Description The Unicorn platform must support the placement of deployed applications over the available
programmable infrastructure, which may span multiple cloud provider offerings.
Application placement may be defined either: (i) manually, by users in their deployment
assembly (e.g., the user specifically defines the resource requirements and offerings to
instantiate); or (ii) constraint-driven, where placement is realized at deployment time based
on the high-level policy objectives given by the user (e.g., follow fair placement taking
into account cost budget, application geo-location, etc.). At this point, high-level user
objectives must be translated to low-level primitives that can be realized through appropriate
handling of the operational status of an application's components by the orchestration
mechanisms of the Unicorn platform, to achieve (near-) optimal application placement. Upon
the initial placement, real-time adaptation and reconfiguration of the execution environment



should be supported. Therefore, adaptation can be triggered towards the fulfilment of the
optimization objectives and may regard scaling aspects (e.g., vertical/horizontal scaling),
adaptation of the quality of provided services, and/or monitoring intrusiveness (e.g., adapt
periodicity).

ID FR.12

Title Register and manage cloud application owners

User Roles Unicorn Admin

Description The Unicorn Admin is responsible for approving and managing (e.g., modifying, suspending,
revoking access) user registrations in the Unicorn platform (such users are denoted as cloud
application admins). Users must therefore be registered to the Unicorn platform in order to
obtain access to the artifacts maintained and distributed under Unicorn (e.g., design libraries)
and to the supported cloud platforms for application deployment.

ID FR.13

Title Manage core context model

User Roles Unicorn Admin

Description The Unicorn platform must design and maintain a multi-facet Core Context Model that will be
used by cloud application developers at design-time, when annotating their code, and at
runtime, during the evaluation of a user's application context. The Context Model should be,
by definition, extensible, since it should allow explicit instantiations and, as a result, the
business logic of various components should heavily rely on the Core Context Model. The
creation, deletion and modification of the centralized Core Context Model, along with
versioning (and version deprecation), will be undertaken by the Unicorn Admin.

ID FR.14

Title Register and Manage enablers interpreting Unicorn code annotations

User Roles Unicorn Admin



Description For the Unicorn platform, an enabler conceptualizes a software component hosted by the
Unicorn orchestration service and/or in the (multi-) cloud execution environment of deployed
cloud applications, which is able to interpret the Unicorn Core Context Model (FR.13).
Indicative components include the orchestration performing runtime context evaluation upon
deployment and the code annotation enablers which perform policy enforcement, such as
monitoring, auto-scaling, security enforcement and data privacy protection. These
components should be updated when the context model is either extended or modified, since
additional functional capabilities must always reflect the new version of the Core Context
Model. As a result, it is important that the enablers of the Unicorn platform are managed and
maintained throughout their lifecycle, with the entity responsible for this task being the
Unicorn Admin.

ID FR.15

Title Unified API providing abstraction of resources and capabilities of underlying programmable
cloud execution environments

User Roles Cloud Application Product Manager, Unicorn Developer

Description The Unicorn platform must expose an API that provides a standardized, consistent and yet
simplified view of the underlying cloud infrastructure of the provider environments supported
by Unicorn, by means of standard information, offering metadata and data models. This
will allow authorized entities, including Unicorn sub-components (e.g., intelligent auto-
scaling, application placement), to query the Unicorn-compliant cloud providers, in a
transparent and infrastructure-agnostic manner, for provider-supported offerings and their
metadata (e.g., supported container flavors, costs, etc.), along with the capabilities supported
(e.g., container memory resizing). One of the main concerns in this task is the level of
granularity of the abstraction. On the one hand, not all the details and characteristics of the
resources are necessary for Unicorn. On the other hand, excessive abstraction hides resource
granularity decomposition details and may lead applications to over-provision unnecessary
resources.

ID FR.16

Title Resource and service (de-)reservation over multi-cloud execution environments

User Roles Unicorn Developer

Description The Unicorn platform must provide a standardized and consistent interface offering the
means to (de-) reserve the appropriate resources and service offerings required for the (un-)
deployment of the considered application, even across multi-cloud execution environments.
This must include the setup and (de-) allocation of programmable infrastructural resources
including, but not limited to, compute, storage and networking for the deployment of
distributed applications in a scalable, dependable, secure and effective way over virtual
environments spanning cloud sites, availability zones and/or regions. In order to
support multi-cloud deployments, the challenges of interacting and synchronizing resource
advertisement and allocation across multiple and heterogeneous cloud offering platforms must



be supported. This task will be undertaken by the Unicorn orchestrator and is tightly coupled
with the Unicorn bootstrapping process described in FR.5.

ID FR.17

Title Development of code annotation libraries

User Roles Unicorn Developer

Description The development, maintenance and modification of the design libraries provided to Unicorn cloud
application developers, for annotating their code with monitoring, resource management,
security and data privacy enforcement policies and constraints, is a task that will be
undertaken by Unicorn developers. This requirement relates to developing the respective
metadata code annotations (e.g., for defining monitoring) and providing the means for
interpreting code annotations and synchronizing the application business logic with the Core
Context Model (FR.13).

ID FR.18

Title Development of enablers interpreting Unicorn code annotations

User Roles Unicorn Developer

Description For the Unicorn platform, the Core Context Model entails design-time usage, through code
annotations by cloud application developers, and runtime usage. In particular, runtime usage
refers to the various components that base their business logic on the model. Indicative
components include the orchestration performing runtime context evaluation upon deployment
and the code annotation enablers which perform policy enforcement, such as monitoring,
auto-scaling, security enforcement and data privacy protection.

ID FR.19

Title Register and manage programmable infrastructure and service offerings

User Roles Cloud Provider

Description The available infrastructural resource and service offerings of a cloud provider have to be
registered to the Unicorn platform, which will advertise and make them available through a
unified resource management API (FR.15). To achieve this, the Unicorn platform must provide
a standardized interface through which cloud offerings are registered and made available to the
platform, in order to ease cloud provider on-boarding, as well as the updating and management
of offerings and their metadata from the provider side. The notion of "programmability" must
be supported to determine the granularity at which resources are advertised, so as to allow the



creation of proper cloud execution environments: provide preferences for the infrastructure
the code runs on (e.g., virtual hardware like servers, storage and networking) and its
configuration including additional provider services (e.g., customized storage solutions).

ID FR.20

Title Monitor cloud offering allocation and consumption

User Roles Cloud Provider

Description Advertised infrastructural resource and service offerings deployed through Unicorn must be
monitored at runtime, in order to provide cloud providers with intuitive and high-level insights
into the current utilization of the cloud offerings allocated and consumed by Unicorn users.

ID FR.21

Title QoS advertising and management

User Roles Cloud Provider

Description Cloud execution environments offer different QoS capabilities and guarantees for their
provided offerings, whether these refer to raw access to programmable resources, such as
compute, memory, storage and network resources, or to bundled application execution
containers, while guarantees are also available for quota management, (prioritized) resource
reservation, traffic shaping and more. As QoS guarantees play an important role in multi-cloud
application placement (FR.11) and runtime adaptation decision-making (FR.9),
which favor cloud providers based on advertised QoS parameters, providers should be
given the means to alter and manage the QoS guarantees for the cloud offerings
advertised through the Unicorn platform.

ID FR.22

Title Register and manage privacy preserving encrypted persistency mechanisms for restricting
data access and movement across cloud sites and availability zones

User Roles Cloud Application Developer, Cloud Application Admin, Unicorn Developer

Description The Unicorn platform must provide the means to allow its users to define, at various
application granularity levels (e.g., entire application, service tier, data object), privacy
preserving policies which restrict access to exposed user data (e.g., an entire database, a database
table, a password, an SSN, etc.) by describing associations between types of access rules,
depending on the data objects and the circumstances under which this access should be allowed.
The context-aware security model (FR.13) will be used as the background method for
annotating data access objects (DAO), thus allowing for the dynamic enforcement of policy



rules when new data access attempts occur, in order to encrypt data, protect against sensitive data
exposure and restrict the movement of data to cloud sites, availability zones or particular geo-
location zones (e.g., outside the EU), based on the defined user constraints. Therefore, during
application runtime, the privacy preserving enabler must be able to interpret annotated code
based on the mapping of the application business logic to the Core Context Model, provide
the essential decoupling between the access decisions and the points of use, and finally grant,
deny and manage any incoming data access requests.
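A minimal sketch of such data access rules follows, assuming a hypothetical rule format that associates a data object with the regions allowed to access it (deny by default); neither the rule schema nor the region names are the actual Unicorn model.

```python
# Hypothetical privacy rules: each data object lists the regions allowed
# to access it; an empty set means the object is never exposed.
rules = [
    {"object": "customers", "allowed_regions": {"eu-west", "eu-central"}},
    {"object": "passwords", "allowed_regions": set()},
]

def access_allowed(data_object, region):
    """Grant access only when a matching rule permits it (deny by default)."""
    for rule in rules:
        if rule["object"] == data_object:
            return region in rule["allowed_regions"]
    return False

print(access_allowed("customers", "eu-west"))   # True
print(access_allowed("customers", "us-east"))   # False
print(access_allowed("passwords", "eu-west"))   # False
```

The deny-by-default fallthrough reflects the decoupling described above: the decision point holds the policy, while the points of use simply ask for a verdict.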

ID FR.23

Title Register and manage persistent security enforcement mechanisms for runtime monitoring,
detecting and labeling of abnormal and intrusive cloud network traffic behavior

User Roles Cloud Application Admin, Cloud Provider

Description The Unicorn platform must provide its users with mechanisms capable of ensuring, at any
time, that the traffic exchanged with the cloud will not harm the (multi-cloud) application
execution environment while preserving the privacy of the data exposed and managed by the
application (FR.22). To achieve this, an Intrusion Detection System (IDS) will be implemented
at the cloud execution environment level, where adaptive network and information flow
monitoring will be established at runtime to detect any in-bound or out-bound exfiltration of
information over well-known communication channels, based on information flow patterns
identified through anomaly detection and pattern recognition algorithms. As deployments of
(micro-) execution containers may be restricted in terms of resources, the IDS will adapt its
information flow tracking process to limit its runtime intrusiveness
based on low-cost approximate and adaptive monitoring techniques while offline processing
will be boosted performance-wise by encompassing GPU-accelerated techniques.
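A minimal sketch of the kind of low-cost, adaptive detection step described above is a sliding-window z-score test over per-interval traffic volumes. The window size and the 3-sigma threshold below are illustrative assumptions; a production IDS would combine several such detectors with pattern recognition over full information flows.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative low-cost anomaly detector: flags a traffic sample that
// deviates more than 3 standard deviations from the recent window mean.
public class TrafficAnomalyDetector {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int capacity;

    public TrafficAnomalyDetector(int capacity) { this.capacity = capacity; }

    // Returns true if the new sample is anomalous with respect to the
    // current window (a classic z-score test), then slides the window
    // forward so the baseline keeps adapting to the observed traffic.
    public boolean isAnomalous(double bytesPerSecond) {
        boolean anomaly = false;
        if (window.size() == capacity) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double var = window.stream().mapToDouble(v -> (v - mean) * (v - mean)).sum() / window.size();
            double std = Math.sqrt(var);
            anomaly = std > 0 && Math.abs(bytesPerSecond - mean) > 3 * std;
            window.removeFirst();              // keep the window adaptive
        }
        window.addLast(bytesPerSecond);
        return anomaly;
    }

    public static void main(String[] args) {
        TrafficAnomalyDetector ids = new TrafficAnomalyDetector(10);
        for (int i = 0; i < 10; i++) ids.isAnomalous(100 + (i % 2)); // baseline traffic
        System.out.println(ids.isAnomalous(101));   // within range -> false
        System.out.println(ids.isAnomalous(5000));  // burst -> true, label for inspection
    }
}
```

Because the check touches only a small fixed-size window, its runtime cost stays bounded regardless of traffic volume, which matches the requirement that monitoring remain non-intrusive inside resource-restricted containers.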

ID FR.24

Title Automated application source code and underlying cloud resource offering vulnerability
assessment, measurement and policy compliance evaluation

User Roles Cloud Application Admin, Cloud Provider

Description The Unicorn platform will provide its users with the mechanisms to ensure that their (multi-)
cloud application execution environment behaves, at runtime, as intended, and that the
security-enforcement and privacy preserving policies and data access rules are not violated.
To achieve this, Unicorn will provide the means for the runtime assessment of the application
execution environment against known vulnerabilities by performing security and benchmark
tests to detect potential security threats and privacy breaches. The level of intrusiveness of
the testing performed by the Unicorn platform will be configurable by users. After testing, the
Unicorn platform will report any suspicious activity and the measured risk exposure level to
the application administrator (FR.8) in order to immediately take action and prevent sensitive
data leakage and privacy breaches.
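The configurable intrusiveness and risk reporting described above can be sketched as follows: each vulnerability check carries an intrusiveness level and a risk score, and only checks at or below the user-configured level are executed before the exposure level is reported. Check names, levels, and scores are illustrative assumptions, not part of the Unicorn specification.

```java
import java.util.List;

// Hypothetical configurable-intrusiveness assessment run (FR.24 sketch).
public class VulnerabilityAssessment {
    static class Check {
        final String name;
        final int intrusiveness;   // 1 = passive inspection, 3 = active probing
        final double riskScore;    // e.g., a CVSS-like severity
        Check(String name, int intrusiveness, double riskScore) {
            this.name = name;
            this.intrusiveness = intrusiveness;
            this.riskScore = riskScore;
        }
    }

    // Reported risk exposure = highest risk among the executed checks.
    public static double riskExposure(List<Check> checks, int maxIntrusiveness) {
        return checks.stream()
                .filter(c -> c.intrusiveness <= maxIntrusiveness)
                .mapToDouble(c -> c.riskScore)
                .max()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        List<Check> checks = List.of(
                new Check("open-ports-scan", 1, 3.5),
                new Check("tls-config-audit", 1, 5.0),
                new Check("sql-injection-probe", 3, 8.7));   // intrusive test
        System.out.println(riskExposure(checks, 1));   // passive checks only -> 5.0
        System.out.println(riskExposure(checks, 3));   // full run -> 8.7
    }
}
```

A lower configured level trades completeness for lower runtime impact, which is the trade-off the user-configurable intrusiveness setting is meant to expose.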



Table 7: Functional Requirements Relation to User Role

User Role Functional Requirements

Cloud Application FR.1 Develop cloud application based on code annotation design libraries and define
Developer runtime policies and constraints

FR.11 Application placement over programmable cloud execution environments

FR.22 Register and manage privacy preserving encrypted persistency mechanisms for
restricting data access and movement across cloud sites and availability zones

FR.23 Register and manage persistent security enforcement mechanisms for runtime
monitoring, detecting and labeling of abnormal and intrusive cloud network traffic
behavior

Cloud Application FR.2 Securely register and manage cloud provider credentials
Product Manager
FR.3 Search interface for extracting underlying programmable cloud execution
environment cloud offering and capability metadata descriptions

FR.4 Creation of Unicorn-compliant cloud application deployment assembly

FR.11 Application placement over programmable cloud execution environments

Cloud Application FR.6 Deployment assembly integrity validation
Tester

Cloud Application FR.2 Securely register and manage cloud provider credentials
Admin
FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.7 Access application behavior and performance monitoring data

FR.8 Real-Time notification and alerting of security incidents and QoS guarantees

FR.9 Autonomic management of deployed cloud applications and real-time adaptation
based on intelligent decision-making mechanisms

FR.10 Manage the runtime lifecycle of a deployed cloud application

FR.11 Application placement over programmable cloud execution environments

FR.22 Register and manage privacy preserving encrypted persistency mechanisms for
restricting data access and movement across cloud sites and availability zones

FR.23 Register and manage persistent security enforcement mechanisms for runtime
monitoring, detecting and labeling of abnormal and intrusive cloud network traffic
behavior




FR.24 Automated application source code and underlying cloud resource offering
vulnerability assessment, measurement and policy compliance evaluation

Unicorn Admin FR.12 Register and manage cloud application owners



FR.13 Manage core context model

FR.14 Register and Manage enablers interpreting Unicorn code annotations

Unicorn FR.2 Securely register and manage cloud provider credentials
Developer
FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.6 Deployment assembly integrity validation

FR.10 Manage the runtime lifecycle of a deployed cloud application

FR.11 Application placement over programmable cloud execution environments

FR.15 Unified API providing abstraction of resources and capabilities of underlying
programmable cloud execution environments

FR.16 Resource and service (de-)reservation over multi-cloud execution environments

FR.17 Development of code annotation libraries

FR.18 Development of enablers interpreting Unicorn code annotations

FR.22 Register and manage privacy preserving encrypted persistency mechanisms for
restricting data access and movement across cloud sites and availability zones

Cloud Provider FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.9 Autonomic management of deployed cloud applications and real-time adaptation
based on intelligent decision-making mechanisms

FR.19 Register and manage programmable infrastructure and service offerings

FR.20 Monitor cloud offering allocation and consumption

FR.21 QoS advertising and management

FR.24 Automated application source code and underlying cloud resource offering
vulnerability assessment, measurement and policy compliance evaluation



7.2 Non-Functional Requirements
Non-functional requirements relate to the desired quality aspects that should be satisfied by the architectural
components of the Unicorn eco-system that, in turn, must satisfy the functional requirements previously
introduced. To this end, the ISO/IEC 25010:2011 software quality model, widely accepted by the software
and research communities, was selected to create a shared conceptualization of these non-functional
attributes [124]. The fundamental objective of the ISO/IEC 25010:2011 standard3 is to address some of the well-
known human biases that can adversely affect the delivery and perception of a software development project
while it also determines which quality characteristics will be taken into account when evaluating the properties
of a software product. The ISO/IEC 25010:2011 quality model classifies software quality in a structured set of
characteristics and sub-characteristics, as follows:

Functional suitability: It refers to a set of attributes that bear on the existence of a set of functions and
their specified properties. The functions are those that satisfy stated or implied needs. Indicative sub-
characteristics include: software functional completeness and functional correctness.
Reliability: It refers to a set of attributes that bear on the capability of software to maintain its level of
performance under stated conditions for a stated period of time. Indicative sub-characteristics include:
software maturity, fault tolerance, recoverability and reliability compliance.
Usability: It refers to a set of attributes that bear on the effort needed for use, and on the individual
assessment of such use, by a stated or implied set of users. Indicative sub-characteristics include:
understandability, learnability, operability, attractiveness and usability compliance.
Efficiency: It refers to a set of attributes that bear on the relationship between the level of performance
of the software and the amount of resources used, under stated conditions. Indicative sub-
characteristics include: time behaviour, resource utilization, latency, service availability and efficiency
compliance.
Maintainability: It refers to a set of attributes that bear on the effort needed to make specified
modifications. Indicative sub-characteristics include: analyzability, changeability, stability, testability
and maintainability compliance.
Portability: It refers to a set of attributes that bear on the ability of software to be transferred from one
environment to another. Indicative sub-characteristics include: adaptability, installability, co-existence
with other software, replaceability and portability compliance.
Security: It refers to a set of attributes that define the degree to which a product or system protects
information and data so that persons or other products or systems have the degree of data access
appropriate to their types and levels of authorization.
Compatibility: It refers to a set of attributes that define the degree to which a product, system or
component can exchange information with other products, systems or components, and/or perform its
required functions, while sharing the same hardware or software environment.

Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is an entity which
can be verified or measured in the software product. Attributes are not defined in the standard, as they vary
between different software products. An overview of the aforementioned characteristics is provided in the
following figure.


3 Note that ISO/IEC 25010 has replaced ISO/IEC 9126.




Figure 40: Non-Technical Quality Aspects as Organised by ISO/IEC 25010:2011

After the selection of the quality model, the next step is to examine which attributes are related to the Unicorn
eco-system and how they map to functional requirements. In the enumerated listings that follow, we make
a concrete mapping between the core quality model attributes and the functional requirements that they
correlate to. In parallel, for each non-functional requirement, a brief description of the Unicorn eco-system
relevant characteristics is also provided.

NR.1 Functional Suitability

Description This characteristic represents the degree to which a product or system provides
functions that meet stated and implied needs when used under specified conditions. This
characteristic is composed of the following sub-characteristics:

Functional completeness. Degree to which the set of functions covers all the
specified tasks and user objectives.
Functional correctness. Degree to which a product or system provides the
correct results with the needed degree of precision.
Functional appropriateness. Degree to which the functions facilitate the
accomplishment of specified tasks and objectives.

Functional FR.1 Develop cloud application based on code annotation design libraries and define
Requirements runtime policies and constraints

FR.4 Creation of Unicorn-compliant cloud application deployment assembly

FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.9 Autonomic management of deployed cloud applications and real-time adaptation
based on intelligent decision-making mechanisms

FR.13 Manage core context model

FR.14 Register and Manage enablers interpreting Unicorn code annotations



FR.15 Unified API for abstraction and searching of resources and capabilities of
underlying programmable cloud execution environments
FR.17 Development of code annotation libraries

FR.18 Development of enablers interpreting Unicorn code annotations

FR.21 QoS advertising and management

NR.2 Performance Efficiency

Description This characteristic represents the performance relative to the amount of resources used
under stated conditions. This characteristic is composed of the following sub-
characteristics:

Time behaviour. Degree to which the response and processing times and
throughput rates of a product or system, when performing its functions, meet
requirements.
Resource utilization. Degree to which the amounts and types of resources used
by a product or system, when performing its functions, meet requirements.
Capacity. Degree to which the maximum limits of a product or system parameter
meet requirements.

Performance in the context of UNICORN refers to the ability of the system to support
collaborative development, allowing multiple users to access the system at the same
time. Also, for UNICORN to be efficient, users need to know at any time what the
resource utilization of the system is. The system should also provide fast
encryption/decryption times between communicating services, as well as the ability to
effectively use hardware resources of any type (e.g., GPUs) for complex and resource-
demanding tasks, such as performing intensive analysis on information flow data in order
to detect potential malicious behaviours.

Functional FR.7 Access application behavior and performance monitoring data
Requirements
FR.8 Real-Time notification and alerting of security incidents and QoS guarantees

FR.9 Autonomic management of deployed cloud applications and real-time adaptation
based on intelligent decision-making mechanisms

FR.11 Application placement over programmable cloud execution environments

FR.16 Resource and service (de-)reservation over multi-cloud execution environments

FR.19 Register and manage programmable infrastructure and service offerings

FR.20 Monitor cloud offering allocation and consumption




FR.23 Register and manage persistent security enforcement mechanisms for runtime
monitoring, detecting and labeling of abnormal and intrusive cloud network traffic
behavior

NR.3 Compatibility

Description Degree to which a product, system or component can exchange information with other
products, systems or components, and/or perform its required functions, while sharing
the same hardware or software environment. This characteristic is composed of the
following sub-characteristics:

Co-existence. Degree to which a product can perform its required functions
efficiently while sharing a common environment and resources with other
products, without detrimental impact on any other product.
Interoperability. Degree to which two or more systems, products or
components can exchange information and use the information that has been
exchanged.

The UNICORN run-time components should be close to industry practice, both
architecturally and implementation-wise. For this reason, UNICORN will support a number
of commonly used standards, standard syntax, APIs, widely available tools, technologies,
methodologies and best practices. The system should support abstractions that hide
system and application infrastructure details from developers and their applications.
UNICORN will also support uniform service descriptions, such as SLA offerings, with clear
policies and guidelines.

Functional FR.1 Develop cloud application based on code annotation design libraries and define
Requirements runtime policies and constraints.

FR.2 Securely register and manage cloud provider credentials

FR.3 Search interface for extracting underlying programmable cloud offerings and
capability metadata descriptions

FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.7 Access application behavior and performance monitoring data

FR.8 Real-Time notification and alerting of security incidents and QoS guarantees

FR.11 Application placement over programmable cloud execution environments

FR.15 Unified API providing abstraction of resources and capabilities of underlying
programmable cloud execution environments



FR.18 Development of enablers interpreting Unicorn code annotations

FR.19 Register and manage programmable infrastructure and service offerings

FR.22 Register and manage privacy preserving encrypted persistency mechanisms for
restricting data access and movement across cloud sites and availability zones.

FR.23 Register and manage persistent security enforcement mechanisms for runtime
monitoring, detecting and labeling of abnormal and intrusive cloud network traffic
behaviour.

NR.4 Usability

Description Degree to which a product or system can be used by specified users to achieve specified
goals with effectiveness, efficiency and satisfaction in a specified context of use. This
characteristic is composed of the following sub-characteristics:

Appropriateness recognizability. Degree to which users can recognize whether a
product or system is appropriate for their needs.
Learnability. Degree to which a product or system can be used by specified users
to achieve specified goals of learning to use the product or system with
effectiveness, efficiency, freedom from risk and satisfaction in a specified context
of use.
Operability. Degree to which a product or system has attributes that make it easy
to operate and control.
User error protection. Degree to which a system protects users against making
errors.
User interface aesthetics. Degree to which a user interface enables pleasing and
satisfying interaction for the user.
Accessibility. Degree to which a product or system can be used by people with
the widest range of characteristics and capabilities to achieve a specified goal in
a specified context of use.

Taking into consideration all the above characteristics of usability, the UNICORN platform
will support automatic and seamless deployment making it very easy to use and learn.
The development platform and tools will be hosted on the cloud and will be accessible
through a web browser. UNICORN will organize all content and user interface elements
logically and will provide a clear presentation interface (e.g., menus and navigation,
reporting, user controls, etc.).

Functional FR.1 Develop cloud application based on code annotation design libraries and define
Requirements runtime policies and constraints

FR.2 Securely register and manage cloud provider credentials




FR.3 Search interface for extracting underlying programmable cloud offerings and
capability metadata descriptions

FR.4 Creation of Unicorn-compliant cloud application deployment assembly

FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.7 Access application behaviour and performance monitoring data

FR.8 Real-Time notification and alerting of security incidents and QoS guarantees

FR.10 Manage the runtime lifecycle of a deployed cloud application

FR.12 Register and manage cloud application owners

FR.15 Unified API providing abstraction of resources and capabilities of underlying
programmable cloud execution environments

FR.16 Resource and service (de-)reservation over multi-cloud execution environments

FR.19 Register and manage programmable infrastructure and service offerings

FR.20 Monitor cloud offering allocation and consumption

FR.21 QoS advertising and management

NR.5 Reliability

Description Degree to which a system, product or component performs specified functions under
specified conditions for a specified period of time. This characteristic is composed of the
following sub-characteristics:

Maturity. Degree to which a system, product or component meets needs for
reliability under normal operation.
Availability. Degree to which a system, product or component is operational and
accessible when required for use.
Fault tolerance. Degree to which a system, product or component operates as
intended despite the presence of hardware or software faults.
Recoverability. Degree to which, in the event of an interruption or a failure, a
product or system can recover the data directly affected and re-establish the
desired state of the system.



Within the context of UNICORN, specific mechanisms will be architecturally defined and
implemented to guarantee that any application can be reliably and securely deployed.

Functional
Requirements FR.4 Creation of Unicorn-compliant cloud application deployment assembly

FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.6 Deployment assembly integrity validation

FR.8 Real-Time notification and alerting of security incidents and QoS guarantees

FR.9 Autonomic management of deployed cloud applications and real-time adaptation
based on intelligent decision-making mechanisms

FR.11 Application placement over programmable cloud execution environments

FR.13 Manage core context model

FR.14 Register and Manage enablers interpreting Unicorn code annotations

FR.15 Unified API providing abstraction of resources and capabilities of underlying
programmable cloud execution environments

FR.21 QoS advertising and management

NR.6 Security

Description The degree to which a product or system protects information and data so that persons
or other products or systems have the degree of data access appropriate to their types
and levels of authorization. This characteristic is composed of the following
subcharacteristics:

Confidentiality. Degree to which a product or system ensures that data are
accessible only to those authorized to have access.
Integrity. Degree to which a system, product or component prevents
unauthorized access to, or modification of, computer programs or data.
Non-repudiation. Degree to which actions or events can be proven to have taken
place, so that the events or actions cannot be repudiated later.
Accountability. Degree to which the actions of an entity can be traced uniquely
to the entity.
Authenticity. Degree to which the identity of a subject or resource can be proved
to be the one claimed.



One of the major focal points of UNICORN is to provide SMEs with security features for
their cloud applications. For that reason, UNICORN will incorporate a user authentication
and authorization system, along with the ability to securely store and manage various
user credentials and cloud access tokens. UNICORN will provide a secure
end-to-end encrypted communication channel between the various components of a
cloud deployment and the ability for DevOps teams to secure application data according
to various policies and regulations.
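The encrypted service-to-service channel described above can be sketched with the standard javax.crypto API, here using AES-GCM for authenticated encryption. Key distribution between components is out of scope for this sketch and assumed to be handled by the platform's secure credential store; the message content is illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Minimal sketch of an end-to-end encrypted channel between two
// components of a cloud deployment, using AES-GCM (authenticated
// encryption) from the standard Java crypto API.
public class SecureChannelDemo {
    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);      // ciphertext + authentication tag
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(ciphertext);     // throws if the tag does not verify
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey key = gen.generateKey();     // shared session key (distribution out of scope)
        byte[] iv = new byte[12];              // fresh nonce per message
        new SecureRandom().nextBytes(iv);

        byte[] message = "db-password=s3cret".getBytes(StandardCharsets.UTF_8);
        byte[] ciphertext = encrypt(key, iv, message);
        byte[] roundTrip = decrypt(key, iv, ciphertext);
        System.out.println(new String(roundTrip, StandardCharsets.UTF_8));
    }
}
```

GCM is chosen here because it provides integrity as well as confidentiality: a tampered ciphertext fails authentication at decryption rather than silently yielding corrupted data.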

Functional FR.1 Develop cloud application based on code annotation design libraries and define
Requirements runtime policies and constraints

FR.2 Securely register and manage cloud provider credentials

FR.4 Creation of Unicorn-compliant cloud application deployment assembly

FR.6 Deployment assembly integrity validation

FR.8 Real-Time notification and alerting of security incidents and QoS guarantees

FR.12 Register and manage cloud application owners

FR.13 Manage core context model

FR.21 QoS advertising and management

FR.22 Register and manage privacy preserving encrypted persistency mechanisms for
restricting data access and movement across cloud sites and availability zones

FR.23 Register and manage persistent security enforcement mechanisms for runtime
monitoring, detecting and labeling of abnormal and intrusive cloud network traffic
behaviour

FR.24 Automated application source code and underlying cloud resource offering
vulnerability assessment, measurement and policy compliance evaluation

NR.7 Maintainability

Description This characteristic represents the degree of effectiveness and efficiency with which a
product or system can be modified to improve it, correct it or adapt it to changes in
environment, and in requirements. This characteristic is composed of the following
subcharacteristics:

Modularity. Degree to which a system or computer program is composed of
discrete components such that a change to one component has minimal impact
on other components.



Reusability. Degree to which an asset can be used in more than one system, or in
building other assets.
Analysability. Degree of effectiveness and efficiency with which it is possible to
assess the impact on a product or system of an intended change to one or more
of its parts, or to diagnose a product for deficiencies or causes of failures, or to
identify parts to be modified.
Modifiability. Degree to which a product or system can be effectively and
efficiently modified without introducing defects or degrading existing product
quality.
Testability. Degree of effectiveness and efficiency with which test criteria can be
established for a system, product or component and tests can be performed to
determine whether those criteria have been met.

In order for UNICORN to be easily maintained, all the annotation libraries, the Core
Context Model, and the Cloud Application Enablers that will perform runtime policy
enforcement should incorporate the above-mentioned characteristics.

Functional FR.1 Develop cloud application based on code annotation design libraries and define
Requirements runtime policies and constraints

FR.2 Securely register and manage cloud provider credentials

FR.9 Autonomic management of deployed cloud applications and real-time adaptation
based on intelligent decision-making mechanisms

FR.10 Manage the runtime lifecycle of a deployed cloud application

FR.12 Register and manage cloud application owners

FR.13 Manage core context model

FR.14 Register and Manage enablers interpreting Unicorn code annotations

FR.17 Development of code annotation libraries

FR.18 Development of enablers interpreting Unicorn code annotations

FR.19 Register and manage programmable infrastructure and service offerings

FR.20 Monitor cloud offering allocation and consumption

NR.8 Portability



Description Degree of effectiveness and efficiency with which a system, product or component can
be transferred from one hardware, software or other operational or usage environment
to another. This characteristic is composed of the following subcharacteristics:

Adaptability. Degree to which a product or system can effectively and efficiently
be adapted for different or evolving hardware, software or other operational or
usage environments.
Installability. Degree of effectiveness and efficiency with which a product or
system can be successfully installed and/or uninstalled in a specified
environment.
Replaceability. Degree to which a product can replace another specified
software product for the same purpose in the same environment.

Portability is one of the most important requirements in the context of UNICORN. It
relates to Unicorn-compliant cloud applications, which should be interoperable and
functional in multiple operational environments (multi-cloud environments). To this end,
UNICORN will adopt commonly used standards (e.g., OASIS TOSCA4) which are
infrastructure and environment agnostic.

Functional FR.1 Develop cloud application based on code annotation design libraries and define
Requirements runtime policies and constraints

FR.4 Creation of Unicorn-compliant cloud application deployment assembly

FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution
environment

FR.11 Application placement over programmable cloud execution environments

FR.13 Manage core context model

FR.14 Register and Manage enablers interpreting Unicorn code annotations

FR.15 Unified API providing abstraction of resources and capabilities of underlying
programmable cloud execution environments

FR.16 Resource and service (de-)reservation over multi-cloud execution environments

FR.17 Development of code annotation libraries

FR.18 Development of enablers interpreting Unicorn code annotations

FR.19 Register and manage programmable infrastructure and service offerings

FR.21 QoS advertising and management


4 https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca



FR.22 Register and manage privacy preserving encrypted persistency mechanisms for
restricting data access and movement across cloud sites and availability zones

FR.23 Register and manage persistent security enforcement mechanisms for runtime
monitoring, detecting and labeling of abnormal and intrusive cloud network traffic
behavior



8 Conclusions
This final section serves as a synopsis of the content presented in this deliverable (D1.1), which is the
outcome of a carefully designed methodology and of research upon industrial and academic data collected
during the initial project implementation activities. In the requirements analysis phase, of which this
deliverable is part, a logical process based on the agile methodology has been followed in order to identify
the Unicorn stakeholders and target audience, derive a complete set of Unicorn Actors and define the
Unicorn system requirements. The steps of this process involved active contribution by all partners, and the
results of this analysis provide the pillars on which the technical and research work that follows (D1.2 Unicorn
reference architecture) will be based.

The first step of this process was to identify the main Unicorn stakeholders and the target audience. Chapter 5
of this deliverable depicts the full picture of the audiences that the final result of the Unicorn project targets.
Moreover, by analysing the current state of the industry, the market gaps to which the Unicorn project will
contribute have been identified. Another contribution of D1.1 is the common terminology/glossary presented
in Chapter 3, which will be used as a reference guide across all future deliverables and in interaction with
Unicorn stakeholders. The final outcome of the first step of the methodology was the identification of the user
roles for the Unicorn eco-system. Some user role responsibilities may overlap among users of the platform,
which may cause misinterpretations; however, as the analysis of the interview results in the next step suggests,
in DevOps teams the boundaries between roles in the engineering team are often quite blurred (e.g., a Cloud
Application Developer may also be in charge of testing, or the Application Administrator may also be a
Developer).

The next step of the logical process was the development of the interview questionnaire for potential Unicorn
target users and the analysis of the responses, which produced results in accordance with all major industry
surveys of the field. The analysis of the responses contributed to deciding and clarifying a set of functional
and non-functional system requirements that can be assigned to the identified user roles (Chapter 7). In
addition, the interview results highlighted the main obstacles and difficulties that IT workers in SMEs currently
face in the cloud environment, such as the lack of unified tools for monitoring and elasticity, the deployment
of applications over multi-cloud environments, and cloud cluster management. Another interesting finding
from the interview process was the prioritization and ranking of the various security threats and privacy
issues that SMEs are facing. This ranking of the security and privacy threats contributed to deciding the core
security functionality that Unicorn will offer to its users.

In addition, the interview process provided valuable information regarding the technologies involved in
realizing various aspects of the Unicorn project. Micro-service architectural approaches are steadily increasing
in popularity among IT workers in SMEs (some are experimenting, some are partly using a micro-service
architecture, and some have fully embraced the micro-service approach). With this increased interest in micro-
service architectural patterns, interviewed organisations also seem to be utilizing containerized solutions (e.g.,
Docker, Swarm, and Kubernetes) for application deployment and orchestration.

In the forthcoming steps, based on the outcomes of D1.1, the overall architecture will be designed and
described in detail in D1.2, covering the main components and artefacts of Unicorn, their interconnection
scheme and the specific interfaces for the exchange of information among them. In addition to the reference
architecture, the supported Unicorn Use Cases, describing the implementation scenarios in the demonstrators
of the mechanisms that will be developed within the project, will be analysed in order to be used as a starting
point for the research/technical and demonstration/business-oriented work packages.



10 Annex

10.1 Identified Unicorn Functional Requirements


FR.1 Develop cloud application based on code annotation design libraries and define runtime policies and
constraints
FR.2 Securely register and manage cloud provider credentials
FR.3 Search interface for extracting underlying programmable cloud offerings and capability metadata
descriptions
FR.4 Creation of Unicorn-compliant cloud application deployment assembly
FR.5 Cloud application deployment bootstrapping to a (multi-) cloud execution environment
FR.6 Deployment assembly integrity validation
FR.7 Access application behavior and performance monitoring data
FR.8 Real-Time notification and alerting of security incidents and QoS guarantees
FR.9 Autonomic management of deployed cloud applications and real-time adaptation based on
intelligent decision-making mechanisms
FR.10 Manage the runtime lifecycle of a deployed cloud application
FR.11 Application placement over programmable cloud execution environments
FR.12 Register and manage cloud application owners
FR.13 Manage the core context model
FR.14 Register and manage enablers interpreting Unicorn code annotations
FR.15 Unified API providing abstraction of resources and capabilities of underlying programmable cloud
execution environments
FR.16 Resource and service (de-)reservation over multi-cloud execution environments
FR.17 Development of code annotation libraries
FR.18 Development of enablers interpreting Unicorn code annotations
FR.19 Register and manage programmable infrastructure and service offerings
FR.20 Monitor cloud offering allocation and consumption
FR.21 QoS advertising and management
FR.22 Register and manage privacy preserving encrypted persistency mechanisms for restricting data
access and movement across cloud sites and availability zones
FR.23 Register and manage persistent security enforcement mechanisms for runtime monitoring,
detecting and labeling of abnormal and intrusive cloud network traffic behavior
FR.24 Automated application source code and underlying cloud resource offering vulnerability
assessment, measurement and policy compliance evaluation
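
To illustrate the annotation-driven requirements above (FR.1, FR.14, FR.17, FR.18), the following is a minimal sketch of how a Unicorn-style code annotation and an interpreting enabler could work in Java. The @Monitor annotation, its parameters and the CheckoutService class are hypothetical examples for illustration only, not part of the Unicorn specification.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class UnicornAnnotationSketch {

    // Hypothetical annotation a developer attaches to an application
    // component to request runtime metric collection (cf. FR.1, FR.7).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Monitor {
        String metric();
        int periodSeconds() default 30;
    }

    // Example cloud application component tagged by the developer.
    @Monitor(metric = "request_throughput", periodSeconds = 10)
    static class CheckoutService { }

    public static void main(String[] args) {
        // An "enabler" (cf. FR.14, FR.18) would inspect such annotations at
        // runtime via reflection (or at compile time via annotation
        // processing) to configure monitoring without touching business logic.
        Monitor m = CheckoutService.class.getAnnotation(Monitor.class);
        System.out.println(m.metric() + " every " + m.periodSeconds() + "s");
    }
}
```

In this sketch, the annotation library (FR.17) only declares metadata; the enabler reads it and wires up the corresponding monitoring behaviour, keeping policy definition separate from application code.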

10.2 Disseminated Questionnaire


The Unicorn questionnaire disseminated to stakeholders is provided here in printable format. The online version
of the questionnaire is accessible via the following link: https://goo.gl/forms/a8rH60DmD3qSWXXN2

