Professional Documents
Culture Documents
MARKS: 100
PART-A (20*1=20)
1. C
2. A
3. C
4. A
5. B
6. B
7. A
8. A
9. A
10. C
11. B
12. B
13. A
14. C
15. B
16. D
17. D
18. A
19. C
20. C
PART-B (5*4=20)
21. UTILITY TREE
A utility tree begins with the word "utility" as the root node. Utility is an
expression of the overall "goodness" of the system. We then elaborate this root
node by listing the major quality attributes that the system is required to exhibit.
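The structure described above can be sketched as a small tree, assuming illustrative quality attributes and scenarios (the attribute names and scenario text below are examples, not from any specific system):

```python
# A minimal sketch of a utility tree: "Utility" is the root, the major
# quality attributes are its children, and each attribute is refined
# into concrete scenarios. All values here are illustrative.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def build_utility_tree():
    return Node("Utility", [
        Node("Performance", [
            Node("Deliver video in real time"),
        ]),
        Node("Modifiability", [
            Node("Add a new data server in under one person-week"),
        ]),
        Node("Availability", [
            Node("Recover from a network failure in under one minute"),
        ]),
    ])

def render(node, depth=0):
    # Produce an indented text view of the tree, one node per line.
    lines = ["  " * depth + node.name]
    for child in node.children:
        lines.extend(render(child, depth + 1))
    return lines

print("\n".join(render(build_utility_tree())))
```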
22. PALM
The authors of the Manifesto go on to describe the principles that underlie their
reasoning:
1. Our highest priority is to satisfy the customer through early and continuous
delivery of valuable software.
4. Business people and developers must work together daily throughout the
project.
5. Build projects around motivated individuals. Give them the environment and
support they need, and trust them to get the job done.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
24. VFC, BENEFIT AND NORMALIZATION
Software as a Service (SaaS): The consumer in this case is an end user. The consumer uses
applications that happen to be running on a cloud.
Platform as a Service (PaaS): The consumer in this case is a developer or system
administrator. The platform provides a variety of services that the consumer may choose to use.
Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision
processing, storage, networks, and other fundamental computing resources.
Deployment Models
The various deployment models for the cloud are differentiated by who owns and operates
the cloud. It is possible that a cloud is owned by one party and operated by a different party.
26. EDGE DOMINANT SYSTEMS:
All successful edge-dominant systems, and the organizations that develop and use these
systems, share a common ecosystem structure. This is called a "Metropolis" structure,
by analogy with a city. A Metropolis involves several classes of stakeholders:
Customers and end users, who consume the value produced by the Metropolis
Developers, who write software and key content for the Metropolis
Technical duties:
To address the complexity of this domain, the WebArrow architect and developers found that
they needed to think and work in two different modes at the same time:
Top-down: designing and analyzing architectural structures to meet the demanding quality
constraints, and fashioning solutions to them.
To compensate for the difficulty in analyzing architectural tradeoffs with any precision,
the team adopted an agile architecture discipline combined with a rigorous program of
experiments aimed at answering specific tradeoff questions. These experiments are what are
called "spikes" in Agile terminology. And these experiments proved to be the key in
resolving tradeoffs, by helping to turn unknown architectural parameters into constants
or ranges.
Making architecture processes agile does not require a radical re-invention of either Agile
practices or architecture methods. The WebArrow team's emphasis on experimentation is one
example of how the two can be blended.
Barry Boehm and colleagues have developed the Incremental Commitment Model, a hybrid
process model framework that attempts to find the balance between agility and commitment.
This model is based upon six principles, including continual adaptation to change and
timely growth of complex systems, and:
6. Risk management: risk-driven anchor point milestones, which are key to synchronizing
and stabilizing all of this concurrent activity.
Requirements fall into three categories:
1. Functional requirements.
2. Quality attribute requirements.
3. Constraints.
A quality attribute scenario has six parts:
1. Source of stimulus.
2. Stimulus.
3. Environment.
4. Artifact.
5. Response.
6. Response measure.
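The six parts can be captured as a simple record. This is a minimal sketch; the availability scenario used to populate it is illustrative, not from the source:

```python
# The six parts of a quality attribute scenario, with an illustrative
# availability scenario filled in as an example (values are assumptions).

from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    source_of_stimulus: str   # entity that generated the stimulus
    stimulus: str             # condition that arrives at the system
    environment: str          # conditions under which it occurs
    artifact: str             # part of the system that is stimulated
    response: str             # activity undertaken after arrival
    response_measure: str     # how the response is tested/measured

availability_scenario = QualityAttributeScenario(
    source_of_stimulus="Heartbeat monitor",
    stimulus="Server becomes unresponsive",
    environment="Normal operation",
    artifact="Process",
    response="Inform the operator; continue to operate",
    response_measure="No downtime",
)
```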
Generate and test: create the initial hypothesis, test the hypothesis, generate the next
hypothesis, and terminate the process.
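The generate-and-test loop described above can be sketched as follows. This is a minimal sketch under stated assumptions: the hypothesis, the test function, and the refinement step are all hypothetical placeholders supplied by the caller:

```python
# A minimal sketch of the generate-and-test design strategy. A design
# "hypothesis" is iterated: test it, and if it fails, generate the next
# hypothesis; terminate when the tests pass or the budget is exhausted.

def generate_and_test(initial_hypothesis, test, next_hypothesis, max_rounds=10):
    hypothesis = initial_hypothesis          # creating the initial hypothesis
    for _ in range(max_rounds):
        failures = test(hypothesis)          # test the hypothesis
        if not failures:
            return hypothesis                # terminating the process: success
        hypothesis = next_hypothesis(hypothesis, failures)  # generate next
    return hypothesis                        # terminating: budget exhausted
```

As a toy illustration, the hypothesis can be a single capacity number that is grown until it satisfies a requirement:

```python
def is_adequate(h):
    return [] if h >= 3 else ["capacity requirement not met"]

def refine(h, failures):
    return h + 1

result = generate_and_test(0, is_adequate, refine)
```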
3. Master the body of knowledge. One of the most important things a competent
architect must do is master the body of knowledge and remain up to date on it.
These are duties that an organization can perform to help improve the success of its
architects:
Make the position of architect highly regarded through visibility, reward, and
prestige.
31.A. HDFS
An HDFS installation consists of a single NameNode process for the whole cluster,
multiple DataNodes, and potentially multiple client applications. To explain the
function of HDFS, we trace through a use case. We describe the successful use case
for "write." HDFS also has facilities to handle failure.
For the "write" use case, we will assume that the file has already been opened.
HDFS does not use locking to allow for simultaneous writing by different
processes. Instead, it assumes a single writer that writes until the file is
complete, after which multiple readers can read the file simultaneously. The
application process has two portions: the application code and a client library
specific to HDFS. The application code can write to the client using a standard
(but overloaded) Java I/O call. The client buffers the information until a block of
64 MB has been collected. Two of the techniques used by HDFS for enhancing
performance are the avoidance of locks and the use of 64-MB blocks as the only
block size supported. No substructure of the blocks is supported by HDFS. The
blocks are undifferentiated byte strings. Any substructure and typing of the
information is managed solely by the application. This is one example of a
phenomenon that we will notice in portions of the cloud: moving application-
specific functionality up the stack as opposed to moving it down the stack to the
infrastructure.
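The client-side buffering described above can be sketched as follows. This is a minimal sketch, not the real HDFS client: the `flush_block` callback stands in for shipping a block to the DataNodes, which is a hypothetical placeholder:

```python
# A minimal sketch of HDFS-style client buffering: a single writer
# appends bytes, and the client buffers until a full fixed-size block
# (64 MB in HDFS, the only block size supported) has been collected.
# Blocks are undifferentiated byte strings with no substructure.

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB

class BlockBufferingClient:
    def __init__(self, flush_block, block_size=BLOCK_SIZE):
        self.flush_block = flush_block  # stand-in for sending to DataNodes
        self.block_size = block_size
        self.buffer = bytearray()

    def write(self, data: bytes):
        # Buffer until a full block has been collected, then flush it.
        self.buffer.extend(data)
        while len(self.buffer) >= self.block_size:
            block = bytes(self.buffer[:self.block_size])
            del self.buffer[:self.block_size]
            self.flush_block(block)

    def close(self):
        # The single writer writes until the file is complete; any
        # remaining partial block is flushed when the file is closed.
        if self.buffer:
            self.flush_block(bytes(self.buffer))
            self.buffer = bytearray()
```

A smaller `block_size` makes the behavior easy to observe: writing ten bytes with a four-byte block size yields two full blocks plus a final partial block on close.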
The Metropolis model is paired with the core/periphery pattern for architecture
for edge-dominant systems. Adopting this duo brings with it changes to the way
that software is developed; in effect, it implies a software development model,
with its implications on tools, processes, activities, roles, and expectations.
2. Crowd management. Policies for crowd management must be aligned with the
organization's strategic goals and must be established early.
3. Core versus periphery. The Metropolis model differentiates the core and
periphery communities, with different tools, processes, activities, roles, and
expectations for each.
5. Focus on architecture. The core architecture is the fabric that holds together a
Metropolis system.
32.A.
DEPLOYMENT MODELS
The various deployment models for the cloud are differentiated by who owns and
operates the cloud. It is possible that a cloud is owned by one party and
operated by a different party, but we will ignore that distinction and assume that
the owner of the cloud also operates the cloud.
There are two basic models and then two additional variants of these. The two
basic models are
Private cloud. The cloud infrastructure is operated solely for a single organization and
the applications owned by that organization. The primary purpose of the
organization is not the selling of cloud services.
Public cloud. The cloud infrastructure is made available to the general public or
a large industry group and is owned by an organization selling cloud services.
Multi-tenancy
2. Crowd management.
4. Requirements process.
5. Focus on architecture.
6. Distributed testing.
7. Automated delivery.