Expert answer: Assignment Preparation: Activities include independent student reading and research. Reference the "Understanding Abstraction and Virtualization" section in Ch. 5, "Understanding Abstraction and Virtualization," of Cloud Computing Bible, and research AMI information online. Create a 1- to 2-page
executive summary and a business presentation of eight to ten slides
describing the most appropriate use of AMI in implementing the service
you described in the Week One Supporting Activity: "Cloud Computing
Service." Use slide notes to detail why you made the decisions you did. I uploaded the assignment from Week Two to give you an idea.
week2_personal.doc
chapter_5.docx
Unformatted Attachment Preview
Running Head: EXECUTIVE BRIEFING MEMO FOR TECHNICAL LEADERSHIP.
Executive Briefing Memo for Technical Leadership.
Bruce Nutter
University of Phoenix
Course
12/11/2017
Policy Title
Information Security Policy
Responsible Executive
Communications Information Officer
Responsible Department
Department of Information and Technology
Endorsed by
Steering Committee of Data Governance
Effective Date
January 1, 2018, following a one-month mandatory training period
Last Update
March 26, 2017
I. Memo Overview
This executive briefing memo provides an overview of how three design principles for cloud
applications will be used by the technical leadership to implement software-as-a-service (SaaS).
SaaS supports the Cloud Computing Service and allows users to schedule their
customers over the internet and to manage equipment levels across all of the company's services.
II. Three design principles for cloud applications used to implement software-as-a-service (SaaS)
SaaS refers to the use of software as a service, rather than in the traditional format. It has
overtaken the traditional method of purchasing software outright and using it as soon as it is
loaded on a computer, replacing that model with a subscription to software services based on the
plan ordered (Antonopoulos & Gillam, 2017). Where the software is hosted in the cloud for
accessibility through the internet also determines how the software service will benefit an entity.
The benefits of using SaaS are enormous and cut across all entities, whether individuals or
organizations.
a. Monitoring Prevents Problems
The management will ensure that the risks associated with SaaS in cloud services are
properly monitored. SaaS enables transparent and timely monitoring and reporting of any risks
relating to the cloud computing service. It has a well-documented escalation and monitoring
process that allows communication across the cloud service (Yang & Liu, 2013). SaaS will be
integrated to support security monitoring, escalation, cloud risk assessment, and reporting of all
risks related to the cloud computing service. As a result, SaaS provides regular escalations and
information that originates from the cloud service provider.
The SaaS application will be used to undertake planning, invoicing, accounting,
monitoring and evaluation, tracking of projects, and communication. The large upfront cost of a
one-time purchase has been replaced by monthly subscriptions, and if a user no longer needs the
SaaS, the subscription can be terminated immediately.
b. Application Management Automates Administration
The management is obliged to ensure that all SaaS applications are managed so that
they comply with administration needs. This means that SaaS will be used to ensure full
automation of all applications and processes in the organization at all times. It obliges users to
comply with the legal, policy, regulatory, and contractual obligations that come with the use of
the cloud computing service. Its authorization will be used to uphold the integrity of the cloud
service it offers to users (Wilder, 2014). This is the cultural dimension observed by SaaS that
allows the compliance manager to keep all policies, codes of conduct, and regulations under
periodic review. The management will also ensure that all the necessary processes are in place
for compliance purposes.
All SaaS applications will be updated automatically online, and files will be saved in the
cloud as opposed to the computer's hard disk. SaaS will also draw its processing power from the
cloud and comes with ready-to-use applications for subscribers (Golden, 2013). It is beneficial
for short-term users and can provide additional services and storage without extra installation. It
has free automatic updates and comes with cross-service compatibility. There are no restrictions
on the internet connection or the location from which files in the cloud server can be accessed. It
also enables users to alter applications to suit their needs.
c. Tier-Based Design Increases Efficiency
The SaaS conforms to the tier-based design principle, which obligates the
management to buy and implement all the security features in the cloud. It will be used to
increase efficiency in cloud computing by ensuring a high level of information risk management
and security. Besides, SaaS has a high capability of facilitating a monitoring and management
process that is of integral significance to all investments in the SaaS cloud service (Golden,
2013). Its architecture dimension will be tailored to enable the manager to get involved in all
aspects of the initiative, from procurement to the commissioning of the SaaS. The management is
thus responsible for the evaluation of the various vendors and the management of various aspects
of the SaaS. They also undertake a review of the technology applicable to the SaaS to assess the
design and security features before making decisions on the final investments.
References
Antonopoulos, N., & Gillam, L. (2017). Cloud computing: Principles, systems, and applications.
Cham, Switzerland: Springer.
Golden, B. (2013). Service-Oriented and Cloud Computing. Hoboken, NJ: John Wiley & Sons,
Inc.
Wilder, B. (2014). Cloud architecture patterns: [develop cloud-native applications]. Beijing:
O’Reilly.
Yang, X., & Liu, L. (2013). Principles, methodologies, and service-oriented approaches for
cloud computing.
Chapter 5
Understanding Abstraction and Virtualization
IN THIS CHAPTER
Understanding how abstraction makes cloud computing possible
Understanding how virtualization creates shared resource pools
Using load balancing to enable large cloud computing applications
Using hypervisors to make virtual machines possible
Discussing system imaging and application portability for the cloud
In this chapter, I discuss different technologies that create shared pools of resources. The key to creating
a pool is to provide an abstraction mechanism so that a logical address can be mapped to a physical
resource. Computers use this technique for placing files on disk drives, and cloud computing networks
use a set of techniques to create virtual servers, virtual storage, virtual networks, and perhaps one day
virtual applications. Abstraction enables the key benefit of cloud computing: shared, ubiquitous access.
In this chapter, you learn about how load balancing can be used to create high performance cloud-based
solutions. Google.com’s network is an example of this approach. Google uses commodity servers to
direct traffic appropriately.
Another technology involves creating virtual hardware systems. An example of this type of approach is
hypervisors that create virtual machine technologies. Several important cloud computing approaches
use a strictly hardware-based approach to abstraction. I describe VMware’s vSphere infrastructure in
some detail, along with some of the unique features and technologies that VMware has developed to
support this type of cloud.
Finally, I describe some approaches to making applications portable. Application portability is a difficult
proposition, and work to make applications portable is in its infancy. Two approaches are described: the
Simple API and AppZero’s Virtual Application Appliance (VAA). VAAs are containers that abstract an
application from the operating system, and they offer the potential to make an application portable
from one platform to another.
Using Virtualization Technologies
The dictionary includes many definitions for the word “cloud.” A cloud can be a mass of water droplets,
gloom, an obscure area, or a mass of similar particles such as dust or smoke. When it comes to cloud
computing, the definition that best fits the context is “a collection of objects that are grouped together.”
It is the act of grouping, of creating a resource pool, that succinctly differentiates cloud
computing from all other types of networked systems.
Not all cloud computing applications combine their resources into pools that can be assigned on
demand to users, but the vast majority of cloud-based systems do. The benefits of pooling resources to
allocate them on demand are so compelling as to make the adoption of these technologies a priority.
Without resource pooling, it is impossible to attain efficient utilization, provide reasonable costs to
users, and proactively react to demand. In this chapter, you learn about the technologies that abstract
physical resources such as processors, memory, disk, and network capacity into virtual resources.
When you use cloud computing, you are accessing pooled resources using a technique called
virtualization. Virtualization assigns a logical name for a physical resource and then provides a pointer to
that physical resource when a request is made. Virtualization provides a means to manage resources
efficiently because the mapping of virtual resources to physical resources can be both dynamic and
facile. Virtualization is dynamic in that the mapping can be assigned based on rapidly changing
conditions, and it is facile because changes to a mapping assignment can be nearly instantaneous.
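The mapping described above can be made concrete with a minimal sketch (the class and resource names below are hypothetical, not from the chapter): a logical name is the stable handle clients use, while the physical resource behind it can be reassigned at any moment.

```python
# Sketch of virtualization as a dynamic logical-name -> physical-resource map.
class VirtualResourcePool:
    def __init__(self):
        self._mapping = {}  # logical name -> physical resource identifier

    def assign(self, logical_name, physical_resource):
        """Map (or remap) a logical name to a physical resource."""
        self._mapping[logical_name] = physical_resource

    def resolve(self, logical_name):
        """Return the physical resource currently behind the logical name."""
        return self._mapping[logical_name]


pool = VirtualResourcePool()
pool.assign("vm-web-01", "host-A:slot-3")
print(pool.resolve("vm-web-01"))   # host-A:slot-3

# The mapping is facile: remapping is a single assignment, and clients
# keep using the same logical name while the physical backing changes.
pool.assign("vm-web-01", "host-B:slot-1")
print(pool.resolve("vm-web-01"))   # host-B:slot-1
```

The dynamic and facile properties the chapter names both fall out of the indirection: nothing a client holds refers to the physical resource directly.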
These are among the different types of virtualization that are characteristic of cloud computing:
Access: A client can request access to a cloud service from any location.
Application: A cloud has multiple application instances and directs requests to an instance based on
conditions.
CPU: Computers can be partitioned into a set of virtual machines with each machine being assigned a
workload. Alternatively, systems can be virtualized through load-balancing technologies.
Storage: Data is stored across storage devices and often replicated for redundancy.
To enable these characteristics, resources must be highly configurable and flexible. You can define the
features in software and hardware that enable this flexibility as conforming to one or more of the
following mobility patterns:
P2V: Physical to Virtual
V2V: Virtual to Virtual
V2P: Virtual to Physical
P2P: Physical to Physical
D2C: Datacenter to Cloud
C2C: Cloud to Cloud
C2D: Cloud to Datacenter
D2D: Datacenter to Datacenter
The techniques used to achieve these different types of virtualization are the subject of this chapter.
According to Gartner (“Server Virtualization: One Path that Leads to Cloud Computing,” by Thomas J.
Bittman, 10/29/2009, Research Note G00171730), virtualization is a key enabler of the first four of five
key attributes of cloud computing:
Service-based: A service-based architecture is where clients are abstracted from service providers
through service interfaces.
Scalable and elastic: Services can be altered to affect capacity and performance on demand.
Shared services: Resources are pooled in order to create greater efficiencies.
Metered usage: Services are billed on a usage basis.
Internet delivery: The services provided by cloud computing are based on Internet protocols and
formats.
Load Balancing and Virtualization
One characteristic of cloud computing is virtualized network access to a service. No matter where you
access the service, you are directed to the available resources. The technology used to distribute service
requests to resources is referred to as load balancing. Load balancing can be implemented in hardware,
as is the case with F5’s BigIP servers, or in software, such as the Apache mod_proxy_balancer extension,
the Pound load balancer and reverse proxy software, and the Squid proxy and cache daemon. Load
balancing is an optimization technique; it can be used to increase utilization and throughput, lower
latency, reduce response time, and avoid system overload.
The following network resources can be load balanced:
Network interfaces and services such as DNS, FTP, and HTTP
Connections through intelligent switches
Processing through computer system assignment
Storage resources
Access to application instances
Without load balancing, cloud computing would be very difficult to manage. Load balancing provides the
necessary redundancy to make an intrinsically unreliable system reliable through managed redirection.
It also provides fault tolerance when coupled with a failover mechanism. Load balancing is nearly always
a feature of server farms, computer clusters, and high-availability applications.
A load-balancing system can use different mechanisms to assign service direction. In the simplest load-balancing mechanisms, the load balancer listens to a network port for service requests. When a request
from a client or service requester arrives, the load balancer uses a scheduling algorithm to assign where
the request is sent. Typical scheduling algorithms in use today are round robin and weighted round
robin, fastest response time, least connections and weighted least connections, and custom assignments
based on other factors.
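Three of the scheduling algorithms named above can be sketched in a few lines each; the server names and weights here are hypothetical placeholders, not a real balancer's configuration.

```python
import itertools

def round_robin(servers):
    """Round robin: hand out servers in a repeating fixed order."""
    return itertools.cycle(servers)

def weighted_round_robin(weights):
    """Weighted round robin: repeat each server in proportion to its weight."""
    expanded = [server for server, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

def least_connections(active):
    """Least connections: pick the server with the fewest active connections."""
    return min(active, key=active.get)


rr = round_robin(["app1", "app2", "app3"])
print([next(rr) for _ in range(4)])        # ['app1', 'app2', 'app3', 'app1']

wrr = weighted_round_robin({"big": 2, "small": 1})
print([next(wrr) for _ in range(3)])       # ['big', 'big', 'small']

print(least_connections({"app1": 12, "app2": 3}))  # app2
```

Production balancers add the other factors the chapter mentions (response time, capacity, health), but each reduces to the same shape: a function from current state to the next server.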
A session ticket is created by the load balancer so that subsequent related traffic from the client that is
part of that session can be properly routed to the same resource. Without this session record or
persistence, a load balancer would not be able to correctly failover a request from one resource to
another. Persistence can be enforced using session data stored in a database and replicated across
multiple load balancers. Other methods can use the client’s browser to store a client-side cookie or
through the use of a rewrite engine that modifies the URL. Of all these methods, a session cookie stored
on the client has the least amount of overhead for a load balancer because it allows the load balancer an
independent selection of resources.
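The client-side cookie approach described above can be sketched as follows (the cookie name and routing helper are hypothetical): the balancer stamps its choice into a cookie, so persistence requires no shared session database.

```python
# Sketch of cookie-based session persistence at a load balancer.
COOKIE = "lb_server"

def route(request_cookies, servers, pick_next):
    """Return (server, cookie_to_set). Reuse the cookie's server if still valid."""
    pinned = request_cookies.get(COOKIE)
    if pinned in servers:
        return pinned, None                # persist: same resource as before
    server = pick_next(servers)            # new session: schedule normally
    return server, (COOKIE, server)        # tell the client to remember it


# First request: no cookie, so the balancer schedules and sets the cookie.
server, set_cookie = route({}, ["app1", "app2"], lambda s: s[0])
print(server, set_cookie)   # app1 ('lb_server', 'app1')

# Follow-up request: the cookie pins the client to the same server, even
# though the scheduler would otherwise have picked a different one.
server, set_cookie = route({"lb_server": "app1"}, ["app1", "app2"], lambda s: s[1])
print(server, set_cookie)   # app1 None
```

This also shows why the chapter calls the cookie method low-overhead: each balancer makes an independent decision from the request alone, with no replicated session store to consult.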
The algorithm can be based on a simple round robin system where the next system in a list of systems
gets the request. Round robin DNS is a common application, where IP addresses are assigned out of a
pool of available IP addresses. Google uses round robin DNS, as described in the next section.
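A toy model of round robin DNS, with hypothetical addresses: the name server returns the same record set on every query but rotates its order, so clients that take the first address naturally spread across the pool.

```python
from collections import deque

class RoundRobinDNS:
    """Toy DNS responder that rotates its A-record pool between queries."""
    def __init__(self, addresses):
        self._pool = deque(addresses)

    def resolve(self, name):
        answer = list(self._pool)
        self._pool.rotate(-1)   # next query starts with the next address
        return answer


dns = RoundRobinDNS(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(dns.resolve("www.example.com")[0])  # 10.0.0.1
print(dns.resolve("www.example.com")[0])  # 10.0.0.2
```

Real round robin DNS is coarser than the balancers above: it cannot see server load or health, which is why Google layers further load balancing behind it, as the next section describes.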
Advanced load balancing
The more sophisticated load balancers are workload managers. They determine the current utilization of
the resources in their pool, the response time, the work queue length, connection latency and capacity,
and other factors in order to assign tasks to each resource. Among the features you find in load
balancers are polling resources for their health, the ability to bring standby servers online (priority
activation), workload weighting based on a resource’s capacity (asymmetric loading), HTTP traffic
compression, TCP offload and buffering, security and authentication, and packet shaping using content
filtering and priority queuing.
An Application Delivery Controller (ADC) is a combined load balancer and application server,
placed between a firewall or router and a server farm providing Web services. An Application
Delivery Controller is assigned a virtual IP address (VIP) that it maps to a pool of servers based on
application specific criteria. An ADC is a combination network and application layer device. You also may
come across ADCs referred to as a content switch, multilayer switch, or Web switch.
These vendors, among others, sell ADC systems:
A10 Networks (http://www.a10networks.com/)
Barracuda Networks (http://www.barracudanetworks.com/)
Brocade Communication Systems (http://www.brocade.com/)
Cisco Systems (http://www.cisco.com/)
Citrix Systems (http://www.citrix.com/)
F5 Networks (http://www.f5.com/)
Nortel Networks (http://www.nortel.com/)
Coyote Point Systems (http://www.coyotepoint.com/)
Radware (http://www.radware.com/)
An ADC is considered to be an advanced version of a load balancer as it not only can provide the
features described in the previous paragraph, but it conditions content in order to lower the workload of
the Web servers. Services provided by an ADC include data compression, content caching, server health
monitoring, security, SSL offload and advanced routing based on current conditions. An ADC is
considered to be an application accelerator, and the current products in this area are usually focused on
two areas of technology: network optimization, and an application or framework optimization. For
example, you may find ADCs that are tuned to accelerate ASP.NET or AJAX applications.
An architectural layer containing ADCs is described as an Application Delivery Network (ADN), and is
considered to provide WAN optimization services. Often an ADN comprises a pair of redundant
ADCs. The purpose of an ADN is to distribute content to resources based on application-specific criteria.
ADNs provide a caching mechanism to reduce traffic, traffic prioritization and optimization, and other
techniques. ADNs began to be deployed on Content Delivery Networks (CDNs) in the late 1990s, where
they added the ability to optimize applications (application fluency) to those networks. Most of the ADC
vendors offer commercial ADN solutions.
In addition to the ADC vendors in the list above, these are additional ADN vendors, among others:
Akamai Technologies (http://www.akamai.com/)
Blue Coat Systems (http://www.bluecoat.com/)
CDNetworks (http://www.cdnetworks.com/)
Crescendo Networks (http://www.crescendonetworks.com/)
Expand Networks (http://www.expand.com/)
Juniper Networks (http://www.juniper.net/)
Google’s cloud is a good example of the use of load balancing, so in the next section let’s consider how
Google handles the many requests that they get on a daily basis.
The Google cloud
According to the Web site tracking firm Alexa (http://www.alexa.com/topsites), Google is the single
most heavily visited site on the Internet; that is, Google gets the most hits. The investment Google has
made in infrastructure is enormous, and the Google cloud is one of the largest in use today. It is
estimated that Google runs over a million servers worldwide, processes a billion search requests, and
generates twenty petabytes of data per day.
Google is understandably reticent to disclose much about its network, because it believes that its
infrastructure, system response, and low latency are key to the company’s success. Google never gives
datacenter tours to journalists, doesn’t disclose where its datacenters are located, and obfuscates the
locations of its datacenters by wrapping them in a corporate veil. Thus, the discreetly named Tetra LLC
(limited liability company) owns the land for the Council Bluffs, Iowa, site, and Lapis LLC owns the land
for the Lenoir, North Carolina, site. This makes Google infrastructure watching something akin to a sport
to many people.
So what follows is what we think we know about Google’s infrastructure and the basic idea behind how
Google distributes its traffic by pooling IP addresses and performing several layers of load balancing.
Google has many datacenters around the world. As of March 2008, Rich Miller of
DataCenterKnowledge.com wrote that Google had at least 12 major installations in the United States
and many more around the world. Google supports over 30 country specific versions of the Google
index, and each localization is supported by one or more datacenters. For example, Paris, London,
Moscow, São Paulo, Tokyo, Toronto, Hong Kong, Beijing, and others support their countries' locales.
Germany has three centers in Berlin, Frankfurt, and Munich; the Netherlands has two at Groningen and
Eemshaven. The countries with multiple datacenters store index replicas and support network peering
relationships. Network peering helps Google have low latency connections to large Internet hubs run by
different network providers.
You can find a list of sites as of 2008 from Miller’s FAQ at
http://www.datacenterknowledge.com/archives/2008/03/27/g …