Balancing Push and Pull for Data Broadcast

Swarup Acharya
Brown University
sa@cs.brown.edu

Michael Franklin
University of Maryland
franklin@cs.umd.edu

Stanley Zdonik
Brown University
sbz@cs.brown.edu

...

Data Volume — A third way that asymmetry arises is due to the volume of data that is transmitted in each direction. Information retrieval applications typically involve a small request message containing a few query terms or a URL (i.e., the “mouse and key clicks”), and result in the transfer of a much larger object or set of objects in response. For such applications, the downstream bandwidth requirements of each client are much higher than the upstream bandwidth requirements.
Updates and New Information — Finally, asymmetry can also arise in an environment where newly created items or updates to existing data items must be disseminated to clients. In such cases, there is a natural (asymmetric) flow of data in the downstream direction.
From the above list, it should be clear that asymmetry can arise not only due to the properties of the communications network, but can arise even in a “symmetric” environment due to the nature of the data flow in the application. Thus, dissemination-based applications such as those for which the Broadcast Disks approach is intended will be asymmetric irrespective of the addition of a client backchannel.
Asymmetry imposes constraints on the behavior of clients and servers in a networked application. For example, in a system with a high client-to-server ratio, clients must limit their interactions with the server to a level which the server and backchannel can handle. World-Wide-Web (WWW) and File Transfer Protocol (FTP) servers deal with this problem by limiting the number of connections they are willing to accept at a given time. Such a limitation can result in delays when accessing popular sites, such as a site containing a new release of a popular software package or one containing up-to-date information on elections, sporting events, stocks, etc.

1.3 Push and Pull

In previous work we have proposed and studied Broadcast Disks as a means for coping with communications asymmetry [Zdon94, Acha95b]. This approach addresses broadcast scheduling and client-side storage management policies for the periodic broadcast of data. With Broadcast Disks, a server uses its best knowledge of client data needs (e.g., as provided by client profiles) to construct a broadcast schedule containing the items to be disseminated and repetitively transmits this “broadcast program” to the client population. Clients monitor the broadcast and retrieve the items they require (i.e., those that may be needed locally but are not currently cached in local storage) as they come by. In such a system, data items are sent from the server to the clients without requiring a specific request from the clients. This approach is referred to as push-based data delivery. The combination of push-based data delivery and a broadcast medium is well suited for data dissemination due to its inherent scalability. Because clients are largely passive in a push-based system, the performance of any one client receiving data from the broadcast is not directly affected by other clients that are also monitoring the broadcast.

In contrast to a periodic broadcast approach, traditional client-server database systems and object repositories transfer data using a pull-based, request-response style of operation (e.g., RPC). Using request-response, clients explicitly request data items by sending messages to a server. When a data request is received at a server, the server locates the information of interest and returns it to the client. Pull-based access has the advantage of allowing clients to play a more active role in obtaining the data they need, rather than relying solely on the schedule of a push-based server. However, there are two obvious disadvantages to pull-based access. First, clients must be provided with a backchannel over which to send their requests to the server. In some environments, such as wireless networks and CATV, the provision of a backchannel can impose substantial additional expense. Second, the server must be interrupted continuously to deal with pull requests and can easily become a scalability bottleneck with large client populations. This latter problem can be mitigated, to some extent, by delivering pulled data items over a broadcast channel and allowing clients to “snoop” on the broadcast to obtain items requested by other clients.

1.4 Overview of the Paper

In this paper, we extend our previous work on data broadcasting by integrating a pull-based backchannel with the push-based Broadcast Disks approach. We focus on the tradeoffs in terms of performance and scalability between the push and pull approaches and investigate ways to efficiently combine the two approaches for both the steady-state and warm-up phases of operation. Client requests are facilitated by providing clients with a backchannel for sending messages to the server. While there are many ways in which the clients could use this capability (e.g., sending feedback and usage profiles), in this study we focus on the use of the backchannel to allow clients to pull pages that are not available in the client cache and that will not appear quickly enough in the broadcast.

The issues we address include: 1) the impact of client requests on steady-state and warm-up performance and the scalability of that performance; 2) the allocation of broadcast bandwidth to pushed and pulled pages; 3) techniques to maximize the benefit of client requests while avoiding server congestion; 4) the sensitivity of the performance to the variance in client access patterns; and 5) the impact of placing only a subset of the database on the Broadcast Disk, thereby forcing clients to pull the rest of the pages.
In this study, we model a system with multiple clients and a single server that controls the broadcast. Clients are provided with a backchannel, but the server has a bounded capacity for accepting requests from clients. In a lightly loaded system, backchannel requests are considered to be inexpensive. The limited capacity of the server, however, means that as the system approaches saturation, client requests become more likely to be dropped (i.e., ignored) by the server. Thus, the effectiveness of the backchannel for any one client depends on the level of backchannel activity created by the other clients.
This study adopts several assumptions about the environment:
1. Independence of frontchannel and backchannel. In our model, traffic on the backchannel in no way interferes with the bandwidth capability of the frontchannel. While this is not always the case for network technologies like Ethernet, which share a single channel, it is true of many others, such as CATV networks and DirecPC, where the uplink and downlink channels are physically different.
2. Broadcast program is static. While having dynamic client profiles, and therefore dynamic broadcast programs, is interesting, we reserve this problem for a future study. We do, however, study both the steady-state performance and the warm-up time for clients.
3. Data is read only. In previous work, we studied techniques for managing volatile data in a Broadcast Disk environment [Acha96b]. We showed that, for moderate update rates, it is possible to approach the performance of the read-only case. Thus, in order to make progress on the problem at hand, we have temporarily ignored that issue here.
The remainder of the paper is structured as follows: In Section 2, we describe the extensions that we made to our previous work in order to combine pull-based data delivery with our existing push-based scheme. In Section 3, we sketch the approach that we took to simulating the combined push-pull environment, and in Section 4, we present the performance results. Section 5 discusses related work. Section 6 summarizes the results of the paper.

Figure 1: Example of a 7-page, 3-disk broadcast program

2. Broadcast Settings

In this section we briefly sketch the Broadcast Disk approach (for more detail, see [Acha95b]) and present a high-level view of the extensions made to the approach to incorporate a pull-based backchannel.

2.1 Broadcast Disks

The Broadcast Disk paradigm is based on a cyclic broadcast of pages (or objects) and a corresponding set of client cache management techniques. In earlier work, we have demonstrated that the layout of the broadcast and the cache management scheme must be designed together in order to achieve the best performance.
Using Broadcast Disks, groups of pages (a.k.a., disks) are assigned different frequencies depending on their probability of access. This approach allows the creation of an arbitrarily fine-grained memory hierarchy between the client and the server. Unlike most memory hierarchies whose parameters are determined by hardware, however, the shape of this memory hierarchy can be adjusted to fit the application using software. Figure 1 shows a simple broadcast program for the seven pages named A, B, C, D, E, F, and G. These pages are placed on three disks with relative spinning speeds of 4:2:1. Page A is on the fastest disk, pages B and C are on the medium speed disk, and pages D, E, F, and G are on the slowest disk.
The number of pages in one complete cycle (i.e., the period) of the broadcast is termed the major cycle. The example broadcast in Figure 1 has a major cycle length of 12 pages. The algorithm used by the server to generate the broadcast schedule requires the following inputs: the number of disks, the relative frequency of each disk, and assignments of data items to the disks on which they are to be broadcast. The algorithm is described in detail in [Acha95a].
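The schedule-generation step can be sketched as follows. This is an illustrative reconstruction of the chunk-interleaving scheme from [Acha95a], not the authors' code; the page labels A–G for the Figure 1 example are assumed, and the sketch requires each disk's size to divide evenly into its chunks.

```python
from functools import reduce
from math import lcm

def broadcast_program(disks, freqs):
    """Generate one major cycle of a Broadcast Disks schedule.

    disks: list of page lists, one per disk (fastest disk first).
    freqs: relative broadcast frequency of each disk.
    """
    max_chunks = reduce(lcm, freqs)            # slots per major cycle / disks
    chunked = []
    for pages, f in zip(disks, freqs):
        n = max_chunks // f                    # chunks for this disk
        size = len(pages) // n                 # pages per chunk
        chunked.append([pages[j * size:(j + 1) * size] for j in range(n)])
    # Interleave: in round i, broadcast chunk (i mod n_j) of each disk j.
    program = []
    for i in range(max_chunks):
        for chunks in chunked:
            program.extend(chunks[i % len(chunks)])
    return program

# The 7-page, 3-disk example with relative spinning speeds 4:2:1:
print(broadcast_program([["A"], ["B", "C"], ["D", "E", "F", "G"]], [4, 2, 1]))
# A 12-page major cycle: "A" appears 4 times, "B"/"C" twice, "D"-"G" once.
```

Running this on the Figure 1 configuration yields the 12-page major cycle described in the text, with the fast disk's page recurring every third slot.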
Our previous work has shown that a cache replacement algorithm that is based purely on access probabilities (e.g., LRU) can perform poorly in this environment. Better performance is achieved by using a cost-based replacement algorithm that takes the frequency of broadcast into account. One such algorithm that we developed is called PIX. If p is the probability of access and x is the frequency of broadcast, PIX ejects the cached page with the lowest value of p/x. Let p_i be the probability of access of page i and let x_i be the broadcast frequency of page i. If p_1/x_1 < p_2/x_2, then page 1 will always be ejected before page 2, even if its probability of access is higher. Intuitively, the value of a page depends not only on the access probability but also on how quickly it arrives on the broadcast.
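The eviction rule reduces to a one-line comparison. A minimal sketch, assuming each cached page's access probability p and broadcast frequency x are known (the helper name and cache representation are hypothetical):

```python
def pix_victim(cache):
    """Choose the page to eject under a PIX-style cost-based policy:
    the page with the lowest ratio of access probability p to
    broadcast frequency x.

    cache: dict mapping page -> (p, x).
    """
    return min(cache, key=lambda page: cache[page][0] / cache[page][1])

# A page broadcast 4 times per cycle is cheap to refetch, so it is
# ejected before a rarer page with a lower access probability:
cache = {"A": (0.20, 4), "D": (0.08, 1)}
print(pix_victim(cache))  # "A", since 0.20/4 = 0.05 < 0.08/1
```

This is why a frequently broadcast hot page can be a better eviction candidate than a cold page that appears only once per major cycle.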

2.2 Integrating a Backchannel

In this paper, we introduce a backchannel into the Broadcast Disk environment. We model the backchannel as a point-to-point connection with the server. Thus, the rate at which requests arrive at the server can grow proportionally with the number of clients. Note that since request messages are typically small, this assumption is justifiable even in situations where clients share a physical connection with the server. The server, on the other hand, has a maximum rate at which it can send out pages in response to client requests (as described below), and moreover, it has a finite queue in which to hold outstanding requests. If a request arrives when the queue is full, that request is thrown away.
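The server's admission behavior (a finite queue, dropped requests, and the duplicate suppression noted in footnote [1]) can be sketched as follows; the class and method names are hypothetical:

```python
from collections import deque

class RequestQueue:
    """Bounded server-side queue of outstanding pull requests.

    A request arriving when the queue is full is dropped; a request
    for a page already queued is ignored, since servicing the earlier
    entry will satisfy the new request as well.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def submit(self, page):
        """Return True if the request will be satisfied, False if dropped."""
        if page in self.queue:            # duplicate: earlier request suffices
            return True
        if len(self.queue) >= self.capacity:
            return False                  # queue full: request thrown away
        self.queue.append(page)
        return True

    def next_pull(self):
        """Next page to broadcast in a pull slot, or None if idle."""
        return self.queue.popleft() if self.queue else None

q = RequestQueue(capacity=2)
print(q.submit("D"), q.submit("E"), q.submit("D"), q.submit("F"))
# True True True False -- "D" is deduplicated; "F" is dropped when full
```

The two-stage degradation described later in this section falls out of this structure: queueing delay grows as the queue fills, and drops begin once it is full.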
In order to support both push-based and pull-based delivery, the server can interleave pages from the broadcast schedule (i.e., push) with pages that are sent in response to a specific client request (i.e., pull). The percentage of slots dedicated to pulled pages is a parameter that can be varied. We call this parameter pull bandwidth (PullBW). When PullBW is set to 100%, all of the broadcast slots are dedicated to pulled pages. This is a pure-pull system. Conversely, if PullBW is set to 0%, the system is said to be a pure-push system; all slots are given to the periodic broadcast schedule, and thus, there is no reason for clients to send pull requests.
Setting PullBW to a value between these two extremes allows the system to support both push and pull. For example, if PullBW=50%, then at most, one page of pull response is sent for each page of the broadcast schedule. We define PullBW to be an upper bound on the amount of bandwidth that will be given to pulled pages. In the case in which there are no pending requests we simply continue with the Broadcast Disk schedule, effectively giving the unused pull slots back to the push program. This approach saves bandwidth at the cost of making the layout of the broadcast less predictable.
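One simple way to realize this interleaving is a credit scheme in which each slot accrues PullBW worth of fractional pull credit; this is an illustrative policy of our own, not necessarily the multiplexing rule used in the paper's simulator:

```python
def schedule_slots(push_pages, requests, pull_bw, n_slots):
    """Interleave push and pull slots.

    pull_bw: upper bound (0.0-1.0) on the fraction of slots offered
    to pulled pages. A pull slot with no pending request falls back
    to the periodic push schedule.
    """
    out, credit, push_i = [], 0.0, 0
    pending = list(requests)
    for _ in range(n_slots):
        credit += pull_bw
        if credit >= 1.0 and pending:       # a pull slot that is actually used
            out.append(("pull", pending.pop(0)))
            credit -= 1.0
        else:                               # push slot, or unused pull slot
            out.append(("push", push_pages[push_i % len(push_pages)]))
            push_i += 1
    return out

# PullBW = 50%: at most one pulled page per pushed page.
print(schedule_slots(["A", "B"], ["X"], 0.5, 4))
# [('push', 'A'), ('pull', 'X'), ('push', 'B'), ('push', 'A')]
```

Note that the final slot is an unused pull slot given back to the push program, exactly the fallback behavior described above; with pull_bw=0.0 the sketch degenerates to pure push, and with 1.0 and a full request queue, to pure pull.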
It is important to note that, unlike the pure-push case, where the activity of clients does not directly affect the performance of other clients, in this environment, overuse of the backchannel by one or more clients can degrade performance for all clients. There are two stages of performance degradation in this model. First, because the rate at which the server can respond to pull requests is bounded, the queue of requests can grow, resulting in additional latency. Second, because the server queue is finite, extreme overuse of the backchannel can result in requests being dropped by the server. The intensity of client requests that can be tolerated before these stages are reached can be adjusted somewhat by changing the PullBW parameter.

2.3 Algorithms

In the remainder of this paper, we compare pure-push, pure-pull, and an integrated push/pull algorithm, all of which use broadcast. All three approaches involve client-side as well as server-side mechanisms. In all of these techniques, when a page is needed, the client’s local cache is searched first. Only if there is a cache miss does the client attempt to obtain the page from the server. The approaches vary in how they handle cache misses.
1. Pure-Push. This method of data delivery is the Broadcast Disk mechanism sketched above. Here, all broadcast bandwidth is dedicated to the periodic broadcast (PullBW=0%) and no backchannel is used. On a page miss, clients simply wait for the desired page to appear on the broadcast.

[1] The server will also ignore a new request for a page that is already in the request queue, since the processing of the earlier message will also satisfy this new request.

[2] Predictability may be important for certain environments. For example, in mobile networks, predictability of the broadcast can be used to reduce power consumption [Imie94b].

2. Pure-Pull. Here, all broadcast bandwidth is dedicated to pulled pages (PullBW=100%) so there is no periodic broadcast. On a page miss, clients immediately send a pull request for the page to the server. This is the opposite of Pure-Push, that is, no bandwidth is given to the periodic broadcast. It is still a broadcast method, though, since any page that is pulled by one client can be accessed on the frontchannel by any other client. This approach can be referred to as request/response with snooping.

3. Interleaved Push and Pull (IPP). This algorithm mixes both push and pull by allowing clients to send pull requests for misses on the backchannel while the server supports a Broadcast Disk plus interleaved responses to the pulls on the frontchannel. As described previously, the allocation of bandwidth to pushed and pulled pages is determined by the PullBW parameter.

A refinement to IPP uses a fixed threshold to limit the use of the backchannel by any one client. The client sends a pull request for a page only if the number of slots before that page is scheduled to appear in the periodic broadcast is greater than the threshold parameter, called ThresPerc. The threshold is expressed as a percentage of the major cycle length (i.e., the push period). When ThresPerc=0%, the client sends requests for all missed pages to the server. When ThresPerc=100% and the whole database appears in the push schedule, the client sends no requests, since all pages will appear within a major cycle. Increasing the threshold has the effect of forcing clients to conserve their use of the backchannel and thus minimize the load on the server. A client will only pull a page that would otherwise have a very high push latency.
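The client-side threshold test reduces to a single comparison. A minimal sketch (the function name is hypothetical; parameter names mirror the text):

```python
def should_pull(slots_until_broadcast, major_cycle_len, thres_perc):
    """Threshold refinement: send a pull request only if the missed
    page is more than ThresPerc of a major cycle away in the
    periodic push schedule."""
    return slots_until_broadcast > (thres_perc / 100.0) * major_cycle_len

# With a 12-slot major cycle and ThresPerc=50%, a page due in 4 slots
# is simply awaited, while one due in 10 slots is pulled:
print(should_pull(4, 12, 50), should_pull(10, 12, 50))  # False True
```

With ThresPerc=0% every miss is pulled, and with ThresPerc=100% nothing within a major cycle is ever pulled, matching the two extremes described above.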

In Section 4, we examine the performance of these different approaches, as well as the impact of parameters such as PullBW and ThresPerc (in the case of IPP). We also investigate the pull-based policies for cases in which only a subset of the database is broadcast. In particular, we examine the performance of the system when the slowest and the intermediate disk in the broadcast program are incrementally reduced in size.

3. Modeling the Broadcast Environment

The results presented in this paper were obtained using a detailed model of the Broadcast Disks environment, extended to account for a backchannel. The simulator is implemented using CSIM [Schw86]. In this section we focus on extensions to the original model needed to integrate pull-based access. Details of the original model are available in [Acha95a].
The simulation model is shown in Figure 2. The simulated clients (described below) access data from the broadcast channel. They filter every request through a cache (if applicable) and through the threshold algorithm (if applicable) before submitting it to the server over the backchannel. The server can broadcast a page from the broadcast schedule or as a response to a queued page request. This choice is indicated by the Push/Pull MUX and is based on the value of PullBW. We describe the client and server models in more detail in the following sections.

3.1 The Client Model

In our previous studies we modeled only a single client because in the absence of a backchannel, the performance of any single client
