QoS Ranking Prediction Based on Past Service Usage Experience in Cloud Services


Abstract

Cloud computing is becoming popular, and building high-quality cloud applications is a critical research problem. Quality of Service (QoS) plays a central role in the effective reservation of resources within service-oriented distributed systems and has been widely investigated in cloud computing. The aim of this paper is to address QoS specifically in the context of the nascent cloud computing paradigm and to propose relevant research questions. QoS rankings provide valuable information for selecting the optimal cloud service from a set of functionally equivalent service candidates. Obtaining QoS values usually requires real-world invocations of the candidate services. To avoid these time-consuming and expensive real-world invocations, this paper proposes a QoS ranking prediction framework for cloud services that takes advantage of the past service usage experiences of other consumers. The QoS measures (delay, throughput, loss, cost) depend on offered traffic and possibly other external processes.

Keywords: quality of service, cloud service, ranking prediction, personalization
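A natural way to realize such ranking prediction is to compare how similarly two consumers order the services they have both invoked. The sketch below (function name and sample data are hypothetical, and ties are counted as discordant for simplicity) uses a Kendall-style rank correlation over shared QoS observations as the consumer-similarity measure:

```python
from itertools import combinations

def kendall_similarity(qos_u, qos_v):
    """Kendall-style rank correlation between two consumers' QoS observations.

    qos_u, qos_v: dicts mapping service id -> observed QoS value (e.g.
    response time). Only services invoked by both consumers are compared.
    Returns a value in [-1, 1]; 1 means both consumers rank the shared
    services identically.
    """
    shared = sorted(set(qos_u) & set(qos_v))
    pairs = list(combinations(shared, 2))
    if not pairs:
        return 0.0
    score = 0
    for a, b in pairs:
        # A pair is concordant if both consumers order services a and b
        # the same way (ties counted as discordant for simplicity).
        if (qos_u[a] - qos_u[b]) * (qos_v[a] - qos_v[b]) > 0:
            score += 1
        else:
            score -= 1
    return score / len(pairs)

# Two consumers' past response-time observations (hypothetical data):
alice = {"s1": 120.0, "s2": 80.0, "s3": 200.0}
bob = {"s1": 110.0, "s2": 90.0, "s3": 450.0, "s4": 60.0}
print(kendall_similarity(alice, bob))  # 1.0 -- same ordering on s1..s3
```

Because only the ordering of observed values matters, this similarity is robust to consumers who experience systematically faster or slower service than each other.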

Introduction:
Quality of Service (QoS) is a broad topic in distributed systems and most often refers to the resource-reservation control mechanisms in place to guarantee a certain level of performance and availability of a service. The scope of this paper is primarily the management and performance of resources such as processors, memory, storage, and networks in cloud computing. QoS is not limited to guarantees of performance and availability; it can cover other aspects of service quality that are outside the scope of this paper, such as security and dependability. The problems surrounding resource reservation are non-trivial for all but the most basic best-effort guarantees, and the problems behind resource capacity planning are NP-hard. QoS provides a level of assurance that the resource requirements of an application are strictly supported. QoS models are associated with end users and providers (and often brokers), involve resource capacity planning via schedulers and load balancers, and utilize Service Level Agreements (SLAs). SLAs provide a facility for agreeing on QoS between an end user and a provider; they define the end user's resource requirements and the provider's guarantees, thus assuring end users that they are receiving the services they have paid for.

Objective of the project:
• To predict the QoS ranking of cloud services.
• To ensure quality of service to end users.
• To provide resource availability and isolation for some applications.

Literature Survey
M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R.H. Katz, A. Konwinski, G. Lee, D.A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "Above the Clouds: A Berkeley View of Cloud Computing," Technical Report EECS-2009-28, Univ. California, Berkeley, 2009.

Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about over-provisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or under-provisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds.
People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing. 1. The illusion of infinite computing resources available on demand, thereby eliminating the need for Cloud Computing users to plan far ahead for provisioning. 2. The elimination of an up-front commitment by Cloud users, thereby allowing companies to start small and increase hardware resources only when there is an increase in their needs. 3. The ability to pay for use of computing resources on a short-term basis as needed (e.g., processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful.

A Network and Device Aware QoS Approach for Cloud-Based Mobile Streaming
Chin-Feng Lai (Inst. of Comput. Sci. & Inf. Eng., National Ilan Univ., Ilan, Taiwan), Honggang Wang, Han-Chieh Chao, and Guofang Nan

Cloud multimedia services provide an efficient, flexible, and scalable data-processing method and offer a solution to user demands for high-quality and diversified multimedia. As intelligent mobile phones and wireless networks become increasingly popular, network services for users are no longer limited to the home: multimedia information can be obtained easily using mobile devices, allowing users to enjoy ubiquitous network services. Considering the limited bandwidth available for mobile streaming and the differing device requirements, this study presented a network- and device-aware Quality of Service (QoS) approach that provides multimedia data suited to the terminal unit's environment via interactive mobile streaming services. The approach further considers the overall network environment, adjusting the interactive transmission frequency and the dynamic multimedia transcoding to avoid wasting bandwidth and terminal power. Finally, this study realized a prototype of the architecture to validate the feasibility of the proposed method. According to the experiments, the method can provide efficient, self-adaptive multimedia streaming services in varying bandwidth environments.
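The core of such a network- and device-aware approach is choosing a transcoding profile that neither exceeds the measured bandwidth nor the device's display capability. The sketch below is a minimal illustration of that selection logic, not the paper's actual algorithm; the function name, profile table, and 20% headroom heuristic are all assumptions:

```python
def pick_transcoding_profile(bandwidth_kbps, screen_height, profiles):
    """Pick the highest-quality profile the link and device can handle.

    profiles: list of (name, required_kbps, video_height) tuples sorted by
    ascending quality. Hypothetical heuristic: never exceed 80% of the
    measured bandwidth (headroom for jitter) or the screen resolution.
    """
    usable = bandwidth_kbps * 0.8
    best = profiles[0]  # always fall back to the lowest profile
    for name, kbps, height in profiles:
        if kbps <= usable and height <= screen_height:
            best = (name, kbps, height)
    return best[0]

# Hypothetical transcoding ladder: (name, required kbps, video height).
PROFILES = [
    ("240p", 400, 240),
    ("360p", 800, 360),
    ("720p", 2500, 720),
    ("1080p", 5000, 1080),
]

print(pick_transcoding_profile(3500, 720, PROFILES))  # 720p
```

Re-running this selection as bandwidth measurements arrive would give the self-adaptive behavior the study describes: the stream steps down when the link degrades and steps back up when capacity returns.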

W.W. Cohen, R.E. Schapire, and Y. Singer, "Learning to Order Things," J. Artificial Intelligence Research, vol. 10, no. 1, pp. 243-270, 1999.
There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order instances given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a binary preference function indicating whether it is advisable to rank one instance before another. Here we consider an on-line algorithm for learning preference functions that is based on Freund and Schapire's 'Hedge' algorithm. In the second stage, new instances are ordered so as to maximize agreement with the learned preference function. We show that the problem of finding the ordering that agrees best with a learned preference function is NP-complete. Nevertheless, we describe simple greedy algorithms that are guaranteed to find a good approximation. Finally, we show how metasearch can be formulated as an ordering problem, and present experimental results on learning a combination of 'search experts', each of which is a domain-specific query expansion strategy for a web search engine.
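The second stage above can be sketched concretely. The greedy pass below repeatedly emits the item with the largest net outgoing preference, which is the flavor of approximation the paper describes for the NP-complete ordering problem; the function names and the toy score-based preference function are hypothetical:

```python
def greedy_order(items, pref):
    """Greedily order items to approximately maximize agreement with a
    learned pairwise preference function.

    pref(a, b) in [0, 1]: learned confidence that a should precede b.
    Finding the optimal ordering is NP-complete; this greedy pass picks
    the item with the largest net outgoing preference, then repeats on
    the remainder.
    """
    remaining = set(items)
    ordering = []
    while remaining:
        # Net preference of v over everything not yet placed.
        def potential(v):
            return sum(pref(v, u) - pref(u, v) for u in remaining if u != v)
        best = max(remaining, key=potential)
        ordering.append(best)
        remaining.remove(best)
    return ordering

# Toy preference function derived from hypothetical scores (higher is better):
scores = {"a": 0.9, "b": 0.4, "c": 0.7}
pref = lambda x, y: 1.0 if scores[x] > scores[y] else 0.0
print(greedy_order(list(scores), pref))  # ['a', 'c', 'b']
```

In the QoS setting, `pref(a, b)` would come from the learned preference function over service candidates rather than from a fixed score table.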

M. Deshpande and G. Karypis, "Item-Based Top-N Recommendation Algorithms," ACM Trans. Information Systems, vol. 22, no. 1, pp. 143-177, 2004.
The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems—a personalized information filtering technology used to identify a set of items that will be of interest to a certain user. User-based collaborative filtering is the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. Unfortunately, the computational complexity of these methods grows linearly with the number of customers, which in typical commercial applications can be several millions. To address these scalability concerns model-based recommendation techniques have been developed. These techniques analyze the user–item matrix to discover relations between the different items and use these relations to compute the list of recommendations. In this article, we present one such class of model-based recommendation algorithms that first determines the similarities between the various items and then uses them to identify the set of items to be recommended. The key steps in this class of algorithms are (i) the method used to compute the similarity between the items, and (ii) the method used to combine these similarities in order to compute the similarity between a basket of items and a candidate recommender item. Our experimental evaluation on eight real datasets shows that these item-based algorithms are up to two orders of magnitude faster than the traditional user-neighborhood based recommender systems and provide recommendations with comparable or better quality.
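The two key steps named above can be illustrated with a small sketch: cosine similarity between item columns of a 0/1 user-item matrix, and a summed-similarity rule for scoring candidates against a basket. The function names and purchase data are hypothetical, and real item-based systems precompute and truncate the similarity matrix rather than recomputing it per query:

```python
import math

def cosine(item_i, item_j, ratings):
    """Cosine similarity between two item columns of a user-item matrix.

    ratings: list of per-user dicts mapping item -> 0/1 purchase flag.
    """
    dot = sum(r[item_i] * r[item_j] for r in ratings)
    ni = math.sqrt(sum(r[item_i] ** 2 for r in ratings))
    nj = math.sqrt(sum(r[item_j] ** 2 for r in ratings))
    return dot / (ni * nj) if ni and nj else 0.0

def top_n(basket, all_items, ratings, n=2):
    """Score each candidate by its summed similarity to the basket items
    and return the n highest-scoring candidates."""
    scores = {}
    for item in all_items:
        if item in basket:
            continue
        scores[item] = sum(cosine(item, b, ratings) for b in basket)
    return sorted(scores, key=scores.get, reverse=True)[:n]

ITEMS = ["milk", "bread", "butter", "beer"]
R = [  # each row: one user's implicit 0/1 ratings (hypothetical)
    {"milk": 1, "bread": 1, "butter": 1, "beer": 0},
    {"milk": 1, "bread": 1, "butter": 1, "beer": 0},
    {"milk": 1, "bread": 1, "butter": 0, "beer": 0},
    {"milk": 0, "bread": 0, "butter": 0, "beer": 1},
]
print(top_n({"milk"}, ITEMS, R, n=2))  # ['bread', 'butter']
```

The speedup the paper reports comes from this structure: item-item similarities depend only on the (slowly changing) ratings matrix, so they can be computed offline, leaving only the cheap basket-scoring step at recommendation time.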

A. Iosup, S. Ostermann, N. Yigitbasi, R. Prodan, T. Fahringer, and D. Epema, "Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing," IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 6, pp. 931-945, June 2011.
Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a large user base with different needs. Thus, clouds have the potential to provide to their owners the benefits of an economy of scale and, at the same time, become an alternative for scientists to clusters, grids, and parallel production environments. However, the current commercial clouds have been built to support web and small database workloads, which are very different from typical scientific computing workloads. Moreover, the use of virtualization and resource time sharing may introduce significant performance penalties for the demanding scientific computing workloads. In this work, we analyze the performance of cloud computing services for scientific computing workloads. We quantify the presence in real scientific computing workloads of Many-Task Computing (MTC) users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific goals. Then, we perform an empirical evaluation of the performance of four commercial cloud computing services including Amazon EC2, which is currently the largest commercial cloud. Last, we compare through trace-based simulation the performance characteristics and cost models of clouds and other scientific computing platforms, for general and MTC-based scientific computing workloads. Our results indicate that the current clouds need an order of magnitude in performance improvement to be useful to the scientific community, and show which improvements should be considered first to address this discrepancy between offer and demand.

Architecture

Expected Results:
Our framework is designed mainly for cloud applications because: 1) client-side QoS values of different users can be obtained easily in the cloud environment; and 2) with many redundant, functionally equivalent services available in the cloud, QoS ranking of candidate services becomes important when building cloud applications. The framework can also be extended to other component-based applications, provided the components are used by a number of users and the past usage experiences of different users can be obtained.
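Once similar consumers have been identified, their observations can be aggregated into a predicted ranking for the active user. The sketch below is one plausible aggregation, not the framework's definitive algorithm: each candidate is scored by the similarity-weighted number of pairwise "wins" it records in the neighbors' QoS data (function names, similarity values, and response times are hypothetical):

```python
def predict_ranking(candidates, neighbors):
    """Rank candidate services for an active user from neighbors' data.

    neighbors: list of (similarity, qos) pairs, where qos maps service
    id -> observed response time (lower is better) for one similar user.
    A candidate's score is the similarity-weighted count of pairwise
    wins against the other candidates, so only orderings matter.
    """
    def wins(svc):
        score = 0.0
        for sim, qos in neighbors:
            for other in candidates:
                if other != svc and svc in qos and other in qos:
                    if qos[svc] < qos[other]:  # svc was faster for this user
                        score += sim
        return score
    return sorted(candidates, key=wins, reverse=True)

# Two similar consumers with hypothetical similarity weights and
# response-time observations:
neighbors = [
    (0.9, {"s1": 120.0, "s2": 80.0, "s3": 200.0}),
    (0.6, {"s1": 150.0, "s2": 95.0, "s3": 180.0}),
]
print(predict_ranking(["s1", "s2", "s3"], neighbors))  # ['s2', 's1', 's3']
```

Because the prediction uses only past observations made by other consumers, the active user obtains a full ranking of the candidates without invoking any of them, which is the cost-saving the framework targets.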
