
Dual Framework and Algorithms for Targeted Online Data Delivery
Abstract:
A variety of emerging online data delivery applications challenge existing techniques for data delivery to human users, applications, or middleware that access data from multiple autonomous servers. In this project, we develop a framework for formalizing and comparing pull-based solutions and present dual optimization approaches. The first approach, most commonly used today, maximizes user utility under the strict setting of meeting a priori constraints on the usage of system resources. We present an alternative and more flexible approach that satisfies all user profiles while minimizing the usage of system resources. We discuss the benefits of this latter approach and develop an adaptive monitoring solution, Satisfy User Profiles (SUP). Through formal analysis, we identify sufficient optimality conditions for SUP. Using real (RSS feed) and synthetic traces, we empirically analyze the behavior of SUP under varying conditions.
Existing System:
In the existing framework there is a mismatch between the simple profiles that a server supports via push and the more complex profile of a decision-making agent.
A complex profile involving multiple servers may also require pull-based resource monitoring; e.g., a profile that checks for a change in a stock price soon after a financial report is released cannot be supported by push from a single server.
A decision agent may also not wish to reveal her profile for privacy or other considerations.
The use of profiles could lower the load on RSS servers by accessing them only to satisfy a user profile.
Much of the existing research in pull-based data delivery casts the problem of data delivery as follows: Given some set of limited system resources, maximize the utility of a set of user profiles.
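As a rough illustration of this classic primal formulation, the sketch below greedily spends a fixed probe budget on the profiles that currently promise the most utility. The names (Profile, expected_utility_per_probe, probe_budget) and the diminishing-returns utility model are assumptions made for illustration, not taken from the paper.
[code]
# Illustrative sketch only: the classic "primal" pull-scheduling problem,
# i.e., maximize total user utility subject to a fixed probe budget.
# All names and the utility model are assumptions, not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    expected_utility_per_probe: float  # assumed estimate, e.g., from update rates

def allocate_probes(profiles, probe_budget):
    """Greedy allocation: spend the limited probe budget on the most valuable profiles."""
    plan = {p.name: 0 for p in profiles}
    for _ in range(probe_budget):
        best = max(profiles, key=lambda p: p.expected_utility_per_probe)
        plan[best.name] += 1
        # Diminishing returns: assume each extra probe on the same profile is worth less.
        best.expected_utility_per_probe *= 0.5
    return plan

if __name__ == "__main__":
    feeds = [Profile("stock-ticker", 0.9), Profile("news-rss", 0.6), Profile("weather", 0.2)]
    print(allocate_probes(feeds, probe_budget=5))
[/code]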
Proposed System:
We propose a dual-framework-based approach to targeted online data delivery.
Our proposed method allows negotiating over the dynamic nature of resources and using time-based constraints. (frame change)
The proposed framework aims at providing a scalable online data delivery solution. We identify three types of entities, namely servers, clients, and brokers.
We propose a dual formulation OptMon2, which reverses the roles of user utility and system constraints, setting the fulfillment of user needs as the hard constraint. (profile matching)
OptMon2 assumes that the system resources that will be consumed to satisfy user profiles should be determined by the specific profiles and the environment, e.g., the model of updates, and does not assume an a priori limitation of system resources. (size matching)
We propose to implement SUP in the .NET Framework 2.0 and to experiment with it on various trace data sets, profiles, life parameters, and update models. Traces of update events include real RSS feed traces and synthetic traces.
We propose an optimal static algorithm SUP for the dual problem.
Under certain conditions, SUP is optimal for both objectives.
We further present adaptive versions of SUP, fbSUP and fbSUP(λ), that handle non-static situations using feedback.
Overall, results show that the dual approach can dominate the traditional approach and achieves good utility/budget performance in the non-static case; a minimal sketch of the dual scheduling idea is given below.
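The following is a minimal sketch of the dual idea, satisfying every profile's execution intervals with as few probes as possible. It is not the paper's SUP algorithm, and the interval representation is an assumption made for illustration.
[code]
# Illustrative sketch of the dual idea behind SUP (not the paper's algorithm):
# satisfy every profile's execution intervals while minimizing the number of probes.
# One probe at time t satisfies every interval that contains t, so a greedy
# "pierce at the earliest deadline" pass uses the minimum number of probes.

def schedule_probes(intervals):
    """intervals: list of (start, end) execution intervals, one per client requirement."""
    probes = []
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # earliest deadline first
        if not probes or probes[-1] < start:   # last probe does not cover this interval
            probes.append(end)                 # probe as late as possible, at the deadline
    return probes

if __name__ == "__main__":
    # Three client requirements; two probes (at t=4 and t=9) satisfy all of them.
    print(schedule_probes([(1, 4), (2, 6), (5, 9)]))
[/code]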
Literature survey:
Monitoring can be done using one of three methods, namely push-based, pull-based, or hybrid. With push-based monitoring the server pushes updates to clients, providing guarantees with respect to data freshness at a possibly considerable overhead at the server. With pull-based monitoring, content is delivered upon request, reducing overhead at servers but with limited effectiveness in estimating object freshness. The hybrid approach combines push and pull, either based on resource constraints [6] or on role definition. For the latter, consider the user profile language we have presented: it is possible that servers of trigger classes will push data to clients, while data regarding query classes will be monitored by pulling content from servers once a notification rule is satisfied.
As another example of the hybrid approach, consider a three-layer architecture in which a mediator is positioned between clients and servers. The mediator can monitor servers by periodically pulling their content and determine when to push data to clients based on their content delivery profiles.
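A rough sketch of such a three-layer hybrid mediator is given below; the class and method names (Mediator, poll_once, push) and the simple change-detection model are assumptions for illustration, not taken from the cited work.
[code]
# Rough sketch of the three-layer hybrid idea (assumed design, not from the paper):
# a mediator pulls content from servers on a schedule and pushes notifications
# to clients whose delivery profiles match the observed change.
import time

class Mediator:
    def __init__(self, servers, clients):
        self.servers = servers          # name -> callable returning current content
        self.clients = clients          # list of (predicate, callback) client profiles
        self.last_seen = {}

    def poll_once(self):
        for name, fetch in self.servers.items():
            content = fetch()
            if self.last_seen.get(name) != content:       # change detected by pulling
                self.last_seen[name] = content
                self.push(name, content)

    def push(self, name, content):
        for matches, notify in self.clients:
            if matches(name, content):                    # profile matching
                notify(name, content)

if __name__ == "__main__":
    servers = {"rss": lambda: "headline-" + str(int(time.time()) % 2)}
    clients = [(lambda n, c: n == "rss", lambda n, c: print("notify:", n, c))]
    Mediator(servers, clients).poll_once()   # normally run on a periodic schedule
[/code]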
Use of RSS Feeds:
RSS is a free service offered by CNN for non-commercial use. Any other uses, including without limitation the incorporation of advertising into or the placement of advertising associated with or targeted towards the RSS Content, are strictly prohibited. You must use the RSS feeds as provided by CNN, and you may not edit or modify the text, content or links supplied by CNN.
Link to Content Pages:
For web posting, the RSS service may be used only with those platforms from which a functional link is made available that, when accessed, takes the viewer directly to the display of the full article on the CNN Site. You may not display the RSS Content in a manner that does not permit successful linking to, redirection to or delivery of the applicable CNN Site web page. You may not insert any intermediate page, splash page or other content between the RSS link and the applicable CNN Site web page.
Ownership/Attribution:
CNN retains all ownership and other rights in the RSS Content, and any and all CNN logos and trademarks used in connection with the RSS Service. You must provide attribution to the appropriate CNN website in connection with your use of the RSS feeds. If you provide this attribution using a graphic, you must use the appropriate CNN website's logo that we have incorporated into the RSS feed.
The central resource-performance tradeoff in a publish-subscribe system in which publishers serve content only when polled involves bandwidth versus update latency. Clearly, polling data sources more frequently will enable the system to detect and disseminate updates earlier. Yet polling every data source constantly would place a large burden on publishers, congest the network, and potentially run afoul of server-imposed polling limits that would ban the system from monitoring the micronews feed or Web page. The goal of Corona, then, is to maximize the effective benefit of the aggregate bandwidth available to the system while remaining within server-imposed bandwidth limits (A High Performance Publish-Subscribe System for the World Wide Web, Venugopalan Ramasubramanian and Ryan Peterson, Cornell University, Ithaca, NY).
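As an illustration of this tradeoff (an assumed model, not Corona's actual optimization), the sketch below splits a global polling budget across feeds in proportion to their estimated update rates while respecting per-server polling caps.
[code]
# Illustrative sketch of the bandwidth/latency tradeoff described above (assumed model):
# split a global polling budget across feeds in proportion to their estimated update
# rates, capped by per-server polling limits.

def allocate_polling(feeds, total_polls_per_min):
    """feeds: dict name -> (estimated_updates_per_min, server_poll_cap_per_min)"""
    total_rate = sum(rate for rate, _cap in feeds.values()) or 1.0
    plan = {}
    for name, (rate, cap) in feeds.items():
        share = total_polls_per_min * rate / total_rate   # more polls where updates are frequent
        plan[name] = min(share, cap)                      # respect server-imposed limits
    return plan

if __name__ == "__main__":
    feeds = {"breaking-news": (6.0, 10), "blog": (0.5, 2), "archive": (0.1, 1)}
    print(allocate_polling(feeds, total_polls_per_min=12))
[/code]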
In this work we consider a middleware setting managed by a proxy that delivers notifications to multiple clients according to their data delivery requirements, specified in the form of client profiles. A client profile identifies a set of resources of interest with which the client requires synchronization and a set of events (e.g., updates to some resource) that identify when such synchronization should take place. Each event is associated with a deadline for getting notification about the event occurrence, and we further assume that clients prefer to get notifications as soon as possible to minimize their observed delay in the system. The time frame associated with each resource and each such event is termed an execution interval, and it requires the proxy to deliver a notification to the client during that interval. A client profile may consist of multiple such intervals. [4]
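A minimal sketch of these client-profile notions follows; the names (ExecutionInterval, delay) are assumptions for illustration and are not taken from [4].
[code]
# Minimal sketch of the client-profile notions described above (names are assumptions):
# each requirement is an execution interval, and the client's observed delay is the
# gap between the event occurrence and the proxy's notification inside that interval.
from dataclasses import dataclass

@dataclass
class ExecutionInterval:
    event_time: float    # when the update of interest occurred
    deadline: float      # latest acceptable notification time

    def delay(self, notify_time):
        if not (self.event_time <= notify_time <= self.deadline):
            raise ValueError("notification misses the execution interval")
        return notify_time - self.event_time   # clients prefer this to be as small as possible

if __name__ == "__main__":
    profile = [ExecutionInterval(0.0, 5.0), ExecutionInterval(3.0, 9.0)]
    print([iv.delay(4.0) for iv in profile])   # one notification at t=4 serves both intervals
[/code]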
The proxy dilemma identifies a tradeoff between completeness and currency when its two objective functions are dependent. Maximizing completeness may decrease currency and vice versa. Such a tradeoff between objectives of general multi-objective optimization problems is well studied (e.g., [8] and many references therein). The conventional definition of a solution to multi-objective problems includes a set of non-dominated feasible solutions, also known as the Pareto set (or its geometric representation in the form of a Pareto curve). Non-dominance between two schedules means that each schedule has better performance than the other with regard to at least one of the objectives. Thus, the Pareto set contains schedules that dominate all other schedules that do not belong to the set. Within the set there is no preference between the schedules. The Pareto set identifies the optimal data delivery tradeoff. [4]
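The non-dominance notion can be made concrete with a toy sketch (illustrative only, not taken from [4] or [8]); here each schedule is scored on a (completeness, currency) pair, higher being better for both objectives.
[code]
# Toy sketch of the Pareto-set notion above: keep only the schedules that are not
# dominated on the (completeness, currency) pair, where higher is better for both.

def dominates(a, b):
    """a dominates b if a is at least as good everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(schedules):
    """schedules: dict name -> (completeness, currency)."""
    return {name: obj for name, obj in schedules.items()
            if not any(dominates(other, obj) for other in schedules.values() if other != obj)}

if __name__ == "__main__":
    schedules = {"eager": (0.9, 0.4), "lazy": (0.5, 0.8), "bad": (0.4, 0.3)}
    print(pareto_set(schedules))   # "bad" is dominated by both others and is dropped
[/code]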
The offline algorithm assumes knowledge of the entire sequence in advance, an assumption that might not hold in some applications. Online algorithms do not have such requirements. They are, however, designed to optimize for worst-case sequences, which occur infrequently in real-world applications, and so are considered pessimistic. Any algorithm that assumes the sequence will not turn out to be one of the worst-case ones can make "risky" decisions that beat the online algorithm whenever the sequence is not worst-case. Such an algorithm can perform better on practical or realistic input sequences than the online algorithm. We now present heuristics that are designed to be adaptive.
In the following heuristics, we take into consideration the most recent history. We define a window of size W, for which we maintain some statistics that help us decide whether we should pre-fetch some data. Whenever a contact with the server is issued, we need to consider pre-fetching of other data items. This pre-fetching (and caching) can save us additional contacts to the server later on.[5]
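A rough sketch of such a windowed prefetch heuristic follows; the window size W, the frequency threshold, and the class interface are assumptions made for illustration, not the heuristics of [5].
[code]
# Rough sketch of the windowed prefetch heuristic described above (parameter names
# such as the window size W and the frequency threshold are assumptions).
from collections import deque, Counter

class PrefetchHeuristic:
    def __init__(self, window_size, threshold):
        self.window = deque(maxlen=window_size)    # the last W requested items
        self.threshold = threshold                 # minimum recent frequency to justify a prefetch

    def record_request(self, item):
        self.window.append(item)

    def items_to_prefetch(self, requested_item):
        """When a contact with the server is issued for one item, piggyback others
        that were requested frequently in the recent window."""
        counts = Counter(self.window)
        return [item for item, c in counts.items()
                if item != requested_item and c >= self.threshold]

if __name__ == "__main__":
    h = PrefetchHeuristic(window_size=10, threshold=3)
    for item in ["a", "b", "a", "c", "a", "b", "b"]:
        h.record_request(item)
    print(h.items_to_prefetch("c"))   # "a" and "b" are hot in the window, so prefetch them
[/code]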

Guest

Please send the literature survey on the specified title.