In late 1994, Prof. Eric Brewer and I (at UC Berkeley) became involved in the
Berkeley InfoPad project, whose mission was to build a very simple, extremely low-power, picocell-wireless, portable multimedia terminal (tablet-like form factor) that relied entirely on software “in the infrastructure” (i.e., running on powerful servers) to provide the intelligence for the system. Later, Eric and Prof. Randy Katz started the BARWAN/Daedalus project to further explore how to facilitate wireless Internet access for heterogeneous devices over a range of wireless networks. We extended the InfoPad ideas in two distinct directions.
The first was the development of a systematic framework for placing intermediaries,
called proxies, between clients and servers to compensate for limitations of the client or network (small screen, low bandwidth, etc.). We extended the original InfoPad implementation of this concept into a mechanism for datatype-specific distillation. Around the same time, Elan Amir had been working on a similar concept for proxying streaming video (the Video Gateway), and we co-authored a paper with Elan for ASPLOS-VII that generalized datatype-specific distillation to other domains and presented measurements showing why it is a good approach for adapting to client and network heterogeneity in general.
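To make the idea concrete, here is a minimal sketch of datatype-specific distillation at a proxy. Everything in it (the function names, the client-capability fields, the truncation policy) is invented for illustration; it is not the InfoPad or ASPLOS-VII code, only the shape of the technique: the proxy dispatches on the datatype and applies a lossy, type-aware reduction sized to the client.

```python
# Illustrative sketch of datatype-specific distillation (names and
# policies are hypothetical, not the original implementation).

def distill_image(pixels, client):
    """Downsample a 2D pixel grid to roughly fit the client's screen width."""
    width = len(pixels[0])
    step = max(1, width // client["screen_width"])
    return [row[::step] for row in pixels[::step]]

def distill_text(text, client):
    """Truncate text on very low-bandwidth links (toy policy)."""
    limit = client["bandwidth_bps"] // 8  # rough per-response byte budget
    return text if len(text) <= limit else text[:limit] + "..."

# The proxy keeps one distiller per datatype and picks it at request time.
DISTILLERS = {"image": distill_image, "text": distill_text}

def proxy_fetch(datatype, payload, client):
    """Apply the datatype-specific distiller before forwarding to the client."""
    return DISTILLERS[datatype](payload, client)

# Example: a 4x4 "image" shrunk for a client 2 pixels wide.
thumb = proxy_fetch("image", [[1, 2, 3, 4]] * 4, {"screen_width": 2})
```

The point is that the lossy reduction is chosen per datatype: images can lose resolution and color depth, video can lose frames, text can lose length, and only the proxy needs to know which loss is acceptable for which type.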
The second direction involved the deployment of such active proxy services to very large communities of users. We built a highly scalable and robust cluster-based version of the active proxy, which we reported on in SOSP-16; it served as a prototype for many subsequent cluster-based Internet applications. The real contribution was the separation of scalability and availability concerns from the main application logic, through the careful application of a small number of simple robustness mechanisms. Although this imposed some constraints on how applications could be structured, we found that a broad class of Internet applications could benefit from our framework. The programming model, called TACC (Transformation, Aggregation, Caching, Customization, for the four elements that seem to dominate interactive Internet applications), emphasized composition of existing services and modules to quickly create new services; the server, called SNS (Scalable Network Server), was a prototype cluster-based TACC server that was deployed on the Berkeley Network of Workstations (NOW) and for a while served 10,000-15,000 users.
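The compositional flavor of TACC can be sketched in a few lines. This is a toy, not the SNS worker API: the worker names and signatures below are invented, and the cache is deliberately soft state, safe to lose and recompute. What the sketch shows is how the four elements compose: a cached fetch, a per-client transformation, an aggregation over several sources, and a user profile driving customization.

```python
# Toy sketch of TACC-style composition (worker names and API are
# hypothetical, not the actual SNS interfaces).

cache = {}  # Caching: soft state; losing it costs only recomputation

def cached(worker):
    """Memoize a worker's results in the soft-state cache."""
    def wrapper(request):
        key = (worker.__name__, request)
        if key not in cache:
            cache[key] = worker(request)
        return cache[key]
    return wrapper

@cached
def fetch(url):                      # stand-in for an origin-server fetch
    return f"<html>{url}</html>"

def transform(page, profile):        # Transformation: adapt content per client
    return page.upper() if profile.get("shout") else page

def aggregate(pages):                # Aggregation: combine several sources
    return "\n".join(pages)

def service(urls, profile):          # Customization: the profile drives the pipeline
    return transform(aggregate(fetch(u) for u in urls), profile)
```

New services are then just new compositions of existing workers, which is the property that made it quick to stand up applications on the shared cluster platform.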
An interesting prototype application was Top Gun Wingman, the first graphical Web browser for the PalmPilot PDA, which relied heavily on the proxy to deliver a good browsing experience on the thinnest of clients. We reported on this work in Middleware '98, and some aspects of it were commercialized and extended by ProxiNet, Inc., a company I co-founded in 1997 that was later acquired by Puma Technology.
Areas left open included the question of whether the looser semantics afforded by the TACC model (we called them BASE, for best-effort availability, soft state, and eventual consistency, to distinguish them from ACID) could be applied in other domains and, if so, what consistency and state-management guarantees could be made to such applications. Eric Brewer and I co-authored a paper for HotOS-VII that discusses the trade-offs between harvest (the completeness of an answer from a query engine) and yield (the probability that an answer will be received at all); we identified structural commonality between the Inktomi search engine and various TACC applications: in both cases, a harvest vs. yield tradeoff was one of the key engineering mechanisms that made the application tractable to deploy and operate from a scalability and robustness point of view. Also, Dr. Murray Mazer began to think about a precise characterization of the state-management and consistency requirements of interactive Internet applications, with the goal of arriving at a more formal framework for discussing how harvest vs. yield tradeoffs work and
illustrating how they can be applied to ease the engineering of applications with severe operational requirements while being able to quantify the effects of doing so on the applications’ semantic behavior.
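The harvest/yield vocabulary can be made concrete with a small sketch. Assume (purely for illustration) that a query engine's data is spread across N partitions, some of which may be down. A strict engine refuses to answer unless every partition responds (it preserves harvest at the cost of yield); a degrading engine answers from whatever partitions are alive (it preserves yield at the cost of harvest). The function names and partition model below are invented, not taken from the Inktomi or TACC code.

```python
# Illustrative model of the harvest vs. yield tradeoff: data is spread
# over partitions, and a query either fails outright or degrades gracefully.
# (Hypothetical sketch; not the Inktomi or TACC implementation.)

def query(partitions, up, degrade=True):
    """Run one query; return (answered, harvest).

    answered -- whether any result is returned (contributes to yield)
    harvest  -- fraction of the full data reflected in the answer
    """
    alive = [p for p in partitions if up[p]]
    if not alive:
        return False, 0.0
    if not degrade and len(alive) < len(partitions):
        return False, 0.0          # strict mode: refuse partial answers
    return True, len(alive) / len(partitions)

# With 4 partitions and 1 down: the strict engine sacrifices yield,
# the degrading engine answers with reduced harvest.
up = {0: True, 1: True, 2: True, 3: False}
strict = query(range(4), up, degrade=False)    # (False, 0.0)
graceful = query(range(4), up, degrade=True)   # (True, 0.75)
```

The engineering choice is then explicit: under partial failure you pick which of the two quantities to degrade, rather than pretending neither will suffer.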