Improving the performance of CGI compliant database gateways

Stathes P. Hadjiefthymiades, Drakoulis I. Martakos
Department of Informatics, University of Athens
TYPA Building, Panepistimioupolis
Ilisia, 15784 Athens, Greece
Tel: +301 7248154, Fax: +301 7219561


The phenomenal growth that the World Wide Web (WWW) service is currently experiencing necessitates the adaptation of legacy information systems such as RDBMSs or full-text search systems to HTTP servers. Existing standards for performing this adaptation (i.e. CGI), although well established, prove highly inefficient during periods of heavy load (multiple hits per unit time). In this paper, after reviewing the relevant mechanisms, we propose a generic architecture which adheres to the existing standards and the client/server model and alleviates the performance handicap of classical database gateways. The performance evaluation carried out as part of this research effort revealed a noteworthy superiority of the proposed architecture over monolithic (non-client/server) CGI-based approaches.
Keywords: WWW-to-DBMS links, CGI, RDBMS, Dynamic SQL, Inter-Process Communication

1. Introduction

Nowadays, it is widely agreed that the WWW has become the de facto standard for the deployment of telematic applications in wide area networks (WANs). Furthermore, the WWW has been adopted as the core technology for the introduction of intranet applications. This hypermedia system, adhering to the client-server model of computation, was conceived and developed at CERN, the European Particle Physics Laboratory in Geneva, by Tim Berners-Lee and his colleagues [1], [2]. The WWW largely owes its success to the standardisation that governs the communication between information servers (WWW servers) and clients (WWW browsers). The three standards primarily involved in this communication are: URI (Universal Resource Identifiers), HTTP (HyperText Transfer Protocol) and HTML (HyperText Markup Language).

WWW servers run specialised software, called HTTPd (HTTP demon), which receives and dispatches HTTP requests. The need to incorporate information sources other than static HTML files (e.g. databases) forced the standardisation of the communication between HTTPds and application programmes. Such standardisation efforts led to the specification of the Common Gateway Interface (CGI) [3]. Throughout the evolution of the WWW, key industrial players like Netscape and Microsoft introduced their own proprietary mechanisms (e.g. NSAPI, ISAPI) as enhanced and elaborate alternatives for performing similar tasks (dynamic generation of pages, extension of the server's basic functionality). However, there is an on-going discussion in the WWW community about these two schools of thought (CGI vs. proprietary, C-callable APIs). Both alternatives are viewed with scepticism due to their characteristics [4]. As shown below, the strengths of CGI are the weaknesses of proprietary APIs and vice versa.

CGI is the most widely deployed mechanism for integrating HTTP servers with other information systems, yet its design does not scale to the performance requirements of contemporary applications. Moreover, CGI applications do not run within the HTTPd process. In addition to the performance cost, this means that CGI applications cannot modify the behaviour of the HTTPd's internal operations, such as logging and authorisation. Finally, CGI is viewed as a security risk by some server operators, due to its connection to a user-level shell. The APIs introduced by Netscape, Microsoft and other servers (e.g. Apache) can be considered an efficient alternative to CGI. This is mainly attributed to the fact that server APIs entail a considerable performance increase and load decrease, as gateway programmes run in or as part of the server processes (instead of a new process being started for each request, as CGI specifies). Furthermore, through the APIs, the operation of the server process can be customised to the individual needs of each site. The main disadvantages of the API solution include the limited portability of the gateway code, which is attributed to the absence of standardisation (completely different syntaxes and command sets). The choice of programming language in API configurations is also rather restricted compared to CGI (C vs. C, Perl, Tcl/Tk, Rexx, Python and a wide range of other languages). Finally, as API-based programmes are allowed to modify the basic functionality offered by the HTTP demon, there is always the concern that buggy code may lead to core dumps or other similar problems.

One form of gateway programme which has drawn the attention of the WWW community during the past years concerns connectivity to relational database management systems (RDBMSs). Such connectivity has been a research issue for a prolonged period of time [5], [6], [7], while many relevant tools have emerged in the software market [8], [9]. Issues and problems associated with the deployment of database gateways include: portability among systems, generality, compliance with standards, performance, and stateful/stateless orientation. In [10] a framework for the deployment of databases on the WWW was proposed; its main advantages were generality and compliance with existing standards. In this paper the performance problem of database gateways is also addressed. Existing database gateways, owing to their support for CGI or proprietary APIs, inherit the respective strengths and weaknesses (as discussed in the previous paragraph). We pursue the design of an architecture and the development of a software prototype in which performance improvement is achieved through adherence to the widespread and portable CGI standard and the generic database access mechanism proposed in [10].

This paper is structured as follows. Section II discusses the performance behaviour of classical database CGI gateways and identifies the need for their re-design. This need is attributed to the large number of CGI scripts which are independently spawned by the HTTPd during periods of heavy load. Each script reserves resources by establishing communication with the serving processes of the management system; this activity is extremely costly and thus degrades the speed of database access. Sections III and IV present a software architecture which reduces, or even eliminates, the need for such a costly operation and thus improves the associated performance. We propose a client/server configuration in which small, concise and portable clients, complying with the CGI standard, are spawned by the HTTP demon; their communication with the management system is possible only through a properly structured database agent (server) which, in turn, is both portable and generic. A protocol has been designed for this communication, taking into account the particularities of both the relational system and the HTTP demon. In this paper we focus on those gateways that simply retrieve information (dispatch SQL SELECT statements). Technical issues associated with the development of this client/server architecture are discussed in detail, including optimisations (fragmentation of responses, etc.). In Section V we present the results of a series of tests realised for the performance evaluation of a prototype built using the proposed architecture. Monolithic versions of CGI scripts, providing identical functionality, were subjected to the same tests to help identify the qualitative benefits obtained by the discussed optimisation. Finally, Section VI points out areas of architecture and prototype improvement and further research.

2. Deficiency of CGI compliant database gateways

Figure 1 provides an overview of the strategy followed by WWW servers for the activation (invocation) of external programmes (gateways). The same approach is also used for the database gateways which constitute the main subject of this paper. We employ the Message Sequence Chart (MSC) notation (CCITT Recommendation Z.120), which provides a very comprehensible way of denoting the sequence of message exchanges and process instantiations [11].

Env (environment) represents the active WWW browsers (two in the scenario sketched in Fig. 1). Requests are transmitted to the WWW server (HTTPd) using the HyperText Transfer Protocol in conjunction with the URL encoding scheme. The two requests shown pertain to the same script and not to some static HTML page.

Figure 1: Activation of CGI processes.

Upon reception of the first REQUEST, the server, obeying the CGI specification, spawns the first instance of the designated CGI script. While the latter performs the required processing (i.e., database access), a new request for the same script arrives at the server. Despite the existence of a process instance, a new one is forked independently. As soon as both instances complete the required processing and pass their results to the WWW server, they are terminated (x mark in the MSC). Figure 2 provides a more detailed view of the lifetime of the database gateways presented in Figure 1.

Figure 2: Information retrieval database gateways.

The establishment of connections to the database management system causes the reservation of resources (memory, processes or threads, etc.) and requires a significant amount of time to complete. When multiple instances of the CGI script are simultaneously active (many HTTP requests in progress at the same time, as shown in Figure 3), this resource consumption becomes considerable and response times increase. The architecture implied in Figures 1 and 2 needs to be redesigned to overcome this deficiency.

Figure 3. Multiple CGI scripts accessing the same DB.

3. Proposed Architecture

As pointed out in Section II and confirmed by the measurements in Section V, resource reservation by the CGI script is one of the most time-consuming tasks in the database access path. In this section we propose a client/server architecture whose principal objective is to drastically reduce the need for such reservation and for the establishment of connections to the management system. The modified MSC of the new architecture is provided in Figure 4.

Figure 4. Message sequence chart of the new architecture.

The core component of our architecture is a database agent (Figure 4) which is permanently attached to the management system and acts as the server process. This component incurs the cost of resource reservation; newly spawned CGI processes (acting as clients) need not spend time on this operation. Upon system initialisation, the agent is not associated with any of the databases managed by the system. Such an association is triggered by the first incoming HTTP request which requires a specific database to be opened and accessed. Subsequent requests pertaining to the same database (a case considered highly probable) leave the state of the agent unaffected.

The agent receives SQL statements from the CGI scripts, executes them on the designated database (which has already been opened and activated) and returns the results to the originators of the respective requests. In this architecture, CGI processes do not interface directly with the DBMS. Their only task (prior to the database access) is the formulation of SQL statements on the basis of information (parameters) conveyed in the HTTP request. The flowcharts presenting the internal structure of the client (script in Figure 4) and the server (database agent in Figure 4) processes are provided in Figures 5.a and 5.b respectively.

Figure 5. Flowcharts of the basic components of the architecture.

One of the most crucial aspects of the architecture presented above is the communication between the server and client processes. The specification of this communication should encompass the design of a database-oriented protocol as well as the selection of an IPC (InterProcess Communication) mechanism suitable for its implementation. The protocol is presented in the following paragraphs, while the selected IPC mechanism is discussed in detail in Section IV.

The protocol under discussion comprises only two message structures. The first refers to requests transmitted by the CGI scripts (clients). As shown in Figure 5.a, CGI scripts are responsible for URL-decoding the activation parameters (contents of QUERY_STRING or standard input, name-value pairs, etc.), composing a request (CGI_Request) intended for the server process and transmitting it. This message should indicate the database to be accessed, the SQL statement to be executed, an identifier of the transmitting entity and the layout of the anticipated results (with respect to the HyperText Markup Language, HTML). In Figure 6 we provide the Backus-Naur Form (augmented BNF [12]) of CGI_Request.

CGI_Request = database_name sql_statement [client_identifier] results_layout
database_name = *OCTET
sql_statement = *OCTET
client_identifier = *DIGIT ; UNIX PID
results_layout = "TABLE" | "PRE" | "OPTION"
Figure 6. BNF of CGI_Request.

As shown in Figure 6, the client_identifier is the process ID (an integer or long integer) of the CGI script which generated the request. The role of this field, which is optional depending on the IPC mechanism used, is discussed in more detail in Section IV. *OCTET denotes a sequence of printable characters and thus represents a text field. Results are communicated back to the client processes by means of the second message of the protocol, referred to as SRV_Response. The BNF of SRV_Response is provided in Figure 7.

SRV_Response = response_from_db_server continue_flow
response_from_db_server = *OCTET
continue_flow = "YES" | "NO"
Figure 7. BNF of SRV_Response.

The response_from_db_server (text) field contains the actual information retrieved by the server process from the designated database. This information is returned to the client embedded in valid HTML commands; the type of commands used is the one specified in the results_layout field of CGI_Request. The continue_flow field is used for optimising the transmission of results back to the client process. The type of optimisation implemented in our prototype is discussed in more detail in Section IV.

4. Technical Issues

Message Queues [13], one of the IPC mechanisms found in contemporary UNIXes, fulfil the requirements presented in Section III. Message Queues are maintained in the system's kernel and have identifiers associated with them. A kernel-hosted message queue can be thought of as a linked list of messages, as shown in Figure 8.

Figure 8. Message queue structures in kernel.

A variety of message structures can be stored in the same queue (as shown in Figure 8, where adjacent messages have different lengths). Furthermore, messages can be placed in or retrieved from the structure by any active process of the system; this IPC mechanism is not limited to two processes, unlike pipes and FIFOs. Message queues are also not based on the stream model, in which the exchanged data are unstructured. As the prototype was programmed in C and Embedded SQL, it was possible to store whole struct instances in the queue supporting the architecture. No additional manipulation of messages was required, in contrast to stream-based IPC mechanisms, which require the communicating parties to agree a priori on a protocol for the interpretation of the data stream. Berkeley Sockets can also accommodate the protocol described in Section III and are the mechanism of choice when network communication is involved between the scripts and the database agent (see Section VI).
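The whole-struct messages mentioned above can be sketched in C as follows. The field names mirror the BNF of Section III, but the sizes and exact layout are our own illustrative assumptions, not the prototype's actual definitions; the leading long tag is the mandatory mtype field discussed below.

```c
#include <string.h>

#define FRAG_SIZE 512             /* fragment size used by the prototype */

/* CGI_Request: sent by a CGI script to the database agent.  mtype
 * carries a well-known value so the agent can read requests selectively;
 * client_pid plays the role of the client_identifier BNF field. */
struct cgi_request {
    long mtype;                   /* fixed value, e.g. 1, read by the agent */
    long client_pid;              /* UNIX PID of the CGI script */
    char database_name[64];
    char sql_statement[1024];
    char results_layout[8];       /* "TABLE" | "PRE" | "OPTION" */
};

/* SRV_Response: one fragment of the agent's reply.  mtype is set to the
 * client's PID so that only the originating client receives it. */
struct srv_response {
    long mtype;                   /* client PID */
    char response_from_db_server[FRAG_SIZE];
    char continue_flow[4];        /* "YES" | "NO" */
};
```

Because both sides share these definitions, no serialisation code is needed: an entire struct is handed to the queue primitives as-is.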

For the deployment of the architecture only one message queue was used, leading to the configuration shown in Figure 9. This queue is the recipient of the messages generated by the CGI scripts as well as of the responses produced by the server process. The queue is created by the server process once, upon system initialisation.

Figure 9. Client/server architecture based on single message queue.

As all involved parties use the same structure for communicating information, it is essential to devise a multiplexing/demultiplexing scheme for the messages stored in the queue of Figure 9. This mechanism is based on the mtype field, the only mandatory field in messages intended for the considered structure. This feature, in addition to fully controllable blocking, renders the structure very flexible to manipulate: processes can query the structure for messages carrying a specific mtype value and remain blocked until such messages enter the queue. Messages submitted to the queue by CGI scripts carry a pre-defined integer value in their mtype field so that they can be read by the database agent. CGI scripts also place their PID in the client_identifier field of requests; the server copies this value into the mtype field of the corresponding responses. After submitting its request, a client process remains blocked awaiting a message carrying its PID as the value of the mtype field. The server, while dispatching database accesses, manipulates mtype fields as shown in Figure 10.

Figure 10. Manipulation of mtype values by the server process
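The mtype-based multiplexing can be demonstrated with a short, self-contained sketch using the System V calls described in [13]. A private queue stands in for the shared queue of Figure 9, and a single process plays both roles; the function name and message text are illustrative.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/types.h>
#include <unistd.h>

#define REQUEST_MTYPE 1L          /* well-known value read by the agent */

struct msgtext { long mtype; char text[64]; };

/* Send a request tagged REQUEST_MTYPE, then emulate the agent by
 * tagging the reply with the client's PID and receiving each message
 * selectively.  Returns 1 on success, 0 on a logic failure and -1 if
 * the kernel does not provide System V message queues. */
static int mtype_roundtrip(void)
{
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0) return -1;

    struct msgtext req = { REQUEST_MTYPE, "SELECT * FROM t" };
    struct msgtext rsp = { (long)getpid(), "result row" };
    struct msgtext in;
    int ok = 1;

    /* client -> queue: request with the well-known mtype */
    ok = ok && msgsnd(qid, &req, sizeof req.text, 0) == 0;
    /* agent -> queue: response tagged with the client's PID */
    ok = ok && msgsnd(qid, &rsp, sizeof rsp.text, 0) == 0;
    /* agent reads only messages with mtype == REQUEST_MTYPE */
    ok = ok && msgrcv(qid, &in, sizeof in.text, REQUEST_MTYPE, 0) >= 0;
    ok = ok && strcmp(in.text, "SELECT * FROM t") == 0;
    /* client blocks on (here: immediately finds) mtype == its PID */
    ok = ok && msgrcv(qid, &in, sizeof in.text, (long)getpid(), 0) >= 0;
    ok = ok && strcmp(in.text, "result row") == 0;

    msgctl(qid, IPC_RMID, NULL);  /* always remove the kernel structure */
    return ok;
}
```

Note that the two msgrcv calls retrieve different messages from the same queue purely on the basis of the mtype selector, which is exactly the demultiplexing scheme of Figure 10.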

The protocol described in Section III leaves room for optimisation concerning the flow of responses. If the database agent produced a single response message per request, queries returning a significant amount of data would needlessly delay the transmission of results back to the WWW browsers. To avoid this, the database agent breaks the result stream into multiple response messages of fixed length (~512 bytes in the prototype). Since messages with the same mtype value are not resequenced by the queue (FIFO), no sequence numbers are included in the response messages produced by the agent. The transmission of a message is initiated by the database agent as soon as its output buffer reaches the pre-defined limit: although additional tuples may remain to be retrieved from the interrogated database, the content of the output buffer is flushed into an additional response message. As response messages are read by the client process, their contents are printed to standard output (returned to the HTTPd). The value of the continue_flow field in the last message pertaining to a specific request is "NO"; thus the client is notified that the transmission of the response by the server has been completed. Figure 11 provides a graphical overview of the discussed optimisation.

Figure 11. Multiple response messages per request.

An additional reason dictating this response fragmentation is that the default size of messages stored in message queues is limited to 8 KB in the majority of UNIX flavours (changing the default requires kernel reconfiguration). The benefit of adopting the fragmentation of responses produced by the database agent is illustrated in Figure 12.

Figure 12. Performance gain due to response fragmentation.
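The fragmentation logic itself is straightforward and can be sketched as follows; the helper names are our own (the prototype's actual code is Embedded SQL/C), and emit() stands in for the msgsnd call that would transmit each fragment.

```c
#include <string.h>

#define FRAG_SIZE 512

static const char *last_flow = "";

/* Demonstration sink: records the continue_flow value of the most
 * recently emitted fragment. */
static void record_emit(const char *frag, size_t n, const char *flow)
{
    (void)frag; (void)n;
    last_flow = flow;
}

/* Split the accumulated result buffer into FRAG_SIZE-byte response
 * messages.  emit() receives each payload together with the
 * continue_flow value: "YES" for every fragment but the last, "NO"
 * for the last one.  Returns the number of fragments produced. */
static int fragment_results(const char *results, size_t len,
                            void (*emit)(const char *frag, size_t n,
                                         const char *flow))
{
    int count = 0;
    size_t off = 0;
    do {                          /* even an empty result yields one "NO" */
        size_t n = len - off < FRAG_SIZE ? len - off : FRAG_SIZE;
        const char *flow = (off + n == len) ? "NO" : "YES";
        emit(results + off, n, flow);
        off += n;
        count++;
    } while (off < len);
    return count;
}
```

For example, a 1200-byte result stream yields three fragments (512, 512 and 176 bytes), the last carrying continue_flow = "NO".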

The server process should be able to dispatch any SQL statement, irrespective of the database, table and field combination it addresses. To this end, query execution is implemented through Dynamic SQL, which has been standardised by X/Open. Dynamic SQL allows applications to interact with database tables and fields without prior knowledge of their structure, datatypes, lengths, etc. (fully dynamic access) [14]. Data retrieval throughout the prototype is based on the Dynamic SQL capabilities of the X/Open compliant Informix RDBMS [15].

When executing a query in Dynamic SQL, memory is allocated ad hoc, according to the contents of the system Descriptor Area (DA). The DA is a fully standardised (X/Open) memory structure (an intricate combination of pointers and arrays) indicating the number of columns fetched as well as their particular characteristics (datatype, length, name, precision, scale, etc.). Furthermore, the DA contains pointers to the actual data. As database access through 3GLs (e.g. C, COBOL) is cursor based, the pointers to data are updated each time a new row is fetched by the system. In our case, the server process scans the whole Descriptor Area after each invocation of the cursor FETCH command and prints its contents according to the results_layout field of the CGI_Request. The dynamic mechanism for database access is presented in the flowchart of Figure 13.

Figure 13. Information retrieval through the Descriptor Area.

The aforementioned X/Open structure renders our prototype capable of accessing any database table, without the need for hardcoded definitions, system-table lookups, etc. In addition, through the functionality of the DATABASE command (Embedded SQL), the prototype can access the whole range of available databases.
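The descriptor-scanning loop of Figure 13 can be illustrated with a drastically simplified, hypothetical stand-in for the X/Open descriptor area (the real structure carries datatype, length, precision and scale information per column, and the data pointers are refreshed by each FETCH):

```c
#include <string.h>

/* Simplified stand-in for one column entry of the descriptor area:
 * a column name and a pointer to the fetched value, assumed to be
 * already converted to text. */
struct da_column {
    const char *name;
    const char *data;             /* updated by each FETCH */
};

/* Append one fetched row to 'out' as an HTML table row, the way the
 * agent formats results for results_layout = "TABLE".  'out' must be
 * a large enough, NUL-terminated buffer. */
static void row_to_html(const struct da_column *cols, int ncols, char *out)
{
    strcat(out, "<TR>");
    for (int i = 0; i < ncols; i++) {
        strcat(out, "<TD>");
        strcat(out, cols[i].data);
        strcat(out, "</TD>");
    }
    strcat(out, "</TR>");
}
```

Because the loop is driven by the column count reported by the descriptor rather than by compile-time definitions, the same code serves any table.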

Results are returned by the database agent in three different formats, specified in the results_layout field of CGI_Request. HTML tables ("TABLE") and preformatted text ("PRE") are mainly used for the tabular presentation of query results. The "OPTION" alternative is used for the population of combo boxes intended for Query By Example (QBE) forms [10]; QBE forms allow the complete specification of query criteria by the end-user.

5. Performance Evaluation

Of the three performance evaluation techniques (measurement, simulation and analytic modelling), measurement is considered the most accurate [16], but it is possible only after the implementation of the considered system. In this paper we adopted measurement to demonstrate that the proposed architecture performs significantly better than traditional database gateways and can be preferred over non-standardised solutions. In our case, measurement was possible since we had developed a software prototype implementing the design work presented in Sections III and IV. As pointed out, the prototype was built in C and Embedded SQL and executed on a typical SVR4 UNIX. The database management system was Informix's OnLine Dynamic Server ver. 7.2; notable among its features are its multi-threaded architecture and X/Open compatibility.

Figure 14. Hardware configuration for performance evaluation

Figure 14 presents the hardware configuration deployed in support of the performance evaluation procedure (experiment). The database management system was hosted by an Axil 320 workstation (HyperSparc 100 MHz, Solaris 2.5) with 64 MB of RAM. The same system also hosted Netscape's FastTrack HTTP demon (ver. 2.0).

The experiment consisted of a series of trials in which a pinger program (load generator) directed a number of HTTP requests towards the server. The pinger program executed on an MS-Windows NT Server (ver. 3.51) hosted by a Pentium 133 MHz machine with 16 MB of RAM. The two machines were interconnected by a 10 Mbps Ethernet LAN and isolated from any other computer to avoid additional traffic which could endanger the reliability of the experiment. Moreover, both systems were running only those processes needed in support of the experiment.

The pinger program was configured to request data from a CGI script using the GET method. The experiment was repeated twice: once for a typical CGI script and once for the discussed prototype (answers from the database agent were fragmented into 512-byte messages). In both cases, the designated database access involved the exhaustive read of a relational table (SELECT * ...). Furthermore, the number of tuples extracted from the database as well as the HTML page returned to the pinger were identical; the size of the HTML page produced was 2.212 KB. The tuples extracted from the database were embedded in an HTML table (results_layout="TABLE"). It should be noted that the pinger program does not perform caching, in contrast to typical WWW browsers.

The monolithic, typical CGI script under evaluation was also programmed in C and Embedded SQL. For the deployment of the script we used common Embedded SQL commands in conjunction with hardcoded definitions; we did not use the dynamic access mechanism (DA) presented in Section IV and [10]. The script's internal structure, although of a static character, is the most popular among the developers of database gateways in WWW sites.

The FastTrack server is capable of forking a set of slave processes upon its initialisation (a mechanism also known as a "pool of processes"). The master process accepts requests from clients and passes the file descriptor to one of the slaves. This architecture reduces (or eliminates) the need to fork a new process for each incoming request, thus reducing the response time experienced by clients. In the experiment documented herein, FastTrack was configured to pre-fork 4 processes (with up to 32 threads each). Access control was disabled in the HTTP demon (requests were dispatched irrespective of the IP address of their originator).

The pinger program was configured to simulate the traffic caused by up to 18 HTTP clients (starting from 2). Each trial consisted of 100 repetitions of the same request, thus allowing the experiment to reach a steady state. Upon a trial's completion the pinger program updates an activity log with the following information: response time, connect rate, connect time, bytes sent and bytes received.

From these metrics we considered the first three the most important and worth presenting. Bytes sent and bytes received were recorded and compared in order to verify that the total size (in bytes) of HTTP requests and responses was equal in both trials. The performance that the two solutions (monolithic vs. client/server with responses fragmented into 512-byte messages) demonstrated with respect to the considered metrics is illustrated in the following figures.

Figure 15. Response time Vs number of clients.

Figure 15 clearly shows that the proposed client/server configuration performs better than the monolithic, traditional CGI script irrespective of the number of clients (threads of the pinger program) requesting data from the server. The performance gap between the two solutions widens as the number of clients increases.

Figure 16. Connect rate Vs number of clients.

Figure 16 shows the number of requests serviced by the HTTP demon per unit time, probably the most commonly discussed metric for Web servers. The pinger utility estimates the connect rate by counting the number of connections completed during the trial period and dividing by the length of the trial. The throughput of both solutions remains stable beyond 4 clients. The dispatching capacity of the client/server configuration is higher than that of the monolithic solution by 2 connections per second.

Figure 17. Connect time Vs number of clients.

Figure 17 concludes the comparison of the two solutions by illustrating the time needed for the establishment of network connections as a function of the number of clients. In this metric too, the suggested solution performs better than traditional gateways.

Apart from the comparative analysis of the two scenarios, we performed some individual measurements of the client/server solution with responses fragmented into 1024-byte messages. These measurements cover up to 10 simultaneous users (threads of the pinger utility) and were plotted in conjunction with the results of the 512-byte scenario. We present only the connect time (Figure 18) and response time (Figure 19) as functions of the number of HTTP clients, since the connect rate was equal to that recorded in the 512-byte case.

Figure 18. Connect time Vs number of clients.

Figure 19. Response time Vs number of clients.

It is clear from Figures 18 and 19 that halving the size of response messages slightly improves the performance of the considered configuration.

6. Summary and Directions for Further Work

We have proposed a software architecture which can be employed in WWW server - RDBMS combinations where response times and compliance with existing standards are considered prerequisites. With it, webmasters need not resort to proprietary and difficult-to-program APIs to extend the basic functionality provided by HTTP demons. The principal objective pursued throughout the design of the architecture was the reduction of the rate at which scripts establish new connections with the underlying relational system; such a task is typical of existing database gateways (adhering to the CGI specification) but is also time- and resource-consuming.

It was shown that a prototype adopting the proposed client/server architecture and related protocols performs significantly better than monolithic CGI scripts developed using typical database APIs (Embedded SQL). Standardised mechanisms like Message Queues and Dynamic SQL were used to ensure the portability of the prototype to other UNIXes and relational management systems (X/Open compliant). Further provision was made to optimise the operation of the client/server architecture through the fragmentation of the responses produced by the database agent.

It was also demonstrated how two software components, the database agent with an unlimited lifetime (demon process) and the client processes (CGI scripts) whose execution terminates as soon as results are passed to the WWW server, can be combined to deliver WWW services at acceptable response times. The design of the database agent provides for the preservation of state information between consecutive accesses; in the prototype, the agent simply remains attached to the database accessed by the most recently dispatched request. This basic architecture is currently being expanded to deal with more complex situations and thus resolve the stateful/stateless problem associated with the operation of the WWW [17]. CGI specifies a set of variables which could be used for the identification of individual sessions [3].

Message Queues, due to their kernel-based operation, limit the communication between the CGI processes and the database agent to a single computer, which must host both the WWW server (HTTPd) and the database management system. Although this centralised configuration is encountered in most DB-powered WWW sites, it should be considered a special case of a distributed set-up in which the database server operates on a different machine from the HTTP demon (Figure 20). In that case, the messages introduced in Section III should be exchanged over a network connection using a transport layer protocol like TCP. Berkeley Sockets [18] constitute an IPC mechanism which can be used efficiently in both the centralised and the distributed configuration and thus satisfy the posed requirement for a general architecture. Furthermore, Sockets allow the communication between co-operating processes to be realised either in stream or in structured (message-oriented) mode. The stream mode could be employed by the database agent for communicating results back to the CGI processes (clients) on a character-by-character basis; this approach is expected to further improve the response times of the system.

Figure 20. A distributed configuration of CGI compliant database gateways.
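The socket plumbing such a distributed agent and client would share can be sketched as below. The example is loopback-only and single-threaded (the kernel completes the TCP handshake against the listen backlog, so connect() may precede accept()); the function name and the framing of the exchanged data are illustrative assumptions, not the prototype's code.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Exchange one request/response pair over a loopback TCP connection.
 * Returns 1 on a successful round trip, 0 on a data mismatch and -1
 * if the environment refuses the socket operations. */
static int tcp_roundtrip(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                        /* ephemeral port */
    socklen_t alen = sizeof addr;

    if (bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(lfd, 1) < 0 ||
        getsockname(lfd, (struct sockaddr *)&addr, &alen) < 0) {
        close(lfd);
        return -1;
    }

    int cfd = socket(AF_INET, SOCK_STREAM, 0);        /* CGI client side */
    if (cfd < 0) { close(lfd); return -1; }
    if (connect(cfd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(cfd); close(lfd);
        return -1;
    }

    int afd = accept(lfd, NULL, NULL);                /* agent side */
    if (afd < 0) { close(cfd); close(lfd); return -1; }

    /* client sends a CGI_Request-like line; agent streams the reply */
    char buf[64] = "";
    int ok = write(cfd, "SELECT * FROM t\n", 16) == 16 &&
             read(afd, buf, sizeof buf) > 0 &&
             write(afd, "<TR><TD>1</TD></TR>", 19) == 19 &&
             read(cfd, buf, sizeof buf) > 0;

    close(afd); close(cfd); close(lfd);
    return ok;
}
```

Replacing the loopback address with the database host's address yields the distributed configuration of Figure 20 while leaving the protocol of Section III unchanged.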

Besides addressing the performance problem of CGI compliant database gateways, our prototype provides the means for the deployment of generic interfaces. The database agent can dispatch any SQL statement (currently limited to SELECTs) owing to the adoption of the X/Open DA mechanism. Client processes can either have the SQL clause hardcoded or formulate queries ad hoc, on the basis of FORM parameters (name-value pairs returned by HTML forms) and meta-information retrieved from specialised files (Query Specification Files, [10]). The protocol proposed in Section III allows for the easy deployment of both scenarios.
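The ad hoc formulation of queries from FORM parameters can be sketched as follows; the helper names are hypothetical, and in a real deployment the table and field names would come from a Query Specification File rather than literals.

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* URL-decode a CGI parameter value in place: '+' becomes a space and
 * %XX escapes become the corresponding octet. */
static void url_decode(char *s)
{
    char *out = s;
    while (*s) {
        if (*s == '+') { *out++ = ' '; s++; }
        else if (*s == '%' && isxdigit((unsigned char)s[1])
                           && isxdigit((unsigned char)s[2])) {
            char hex[3] = { s[1], s[2], '\0' };
            *out++ = (char)strtol(hex, NULL, 16);
            s += 3;
        } else {
            *out++ = *s++;
        }
    }
    *out = '\0';
}

/* Compose a SELECT statement from one decoded field/value pair.
 * 'out' must be large enough for the resulting statement. */
static void build_query(const char *table, const char *field,
                        char *value, char *out)
{
    url_decode(value);
    sprintf(out, "SELECT * FROM %s WHERE %s = '%s'", table, field, value);
}
```

The resulting statement is then placed in the sql_statement field of CGI_Request and dispatched to the agent unchanged.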

Further research in this area could include the determination of the optimum maximum size for system message queues. This size should be properly adjusted to prevent the overflow of the memory structure (which leads to client crashes). As the maximum size of message queues is a kernel parameter, its modification requires a kernel rebuild. The optimum size depends strongly on the size of the HTML page returned to the browser, on the fragmentation level of the responses generated by the database agent and on the dispatching capacity of the database agent.


Acknowledgements

We would like to acknowledge the support provided by the Greek distributor of Informix products, Ergodata S.A., who were kind enough to provide the necessary database software for the deployment of the prototype and helped in configuring it appropriately. We would also like to thank Dr. Costas Vasilakis and Mrs. Anne Sotiropoulou for their assistance in installing and configuring the hardware set-up for the experiment documented herein.


References

[1] Berners-Lee T. and Cailliau R., World Wide Web Proposal for a HyperText Project, CERN European Laboratory for Particle Physics, Geneva CH, November (1990).
[2] Berners-Lee T., Cailliau R., Luotonen A., Frystyk Nielsen H. and Secret A., The World-Wide Web, Communications of the ACM, 37(8) (1994).
[3] Robinson D., The WWW Common Gateway Interface Version 1.1, Internet Draft, January (1996).
[4] Everitt P., The ILU Requested: Object Services in HTTP Servers, W3C Informational Draft, March (1996).
[5] Eichmann D., McGregor T. and Danley D., Integrating Structured Databases Into the Web: The MORE System, in the proceedings of the First International WWW Conference, Computer Networks and ISDN Systems 27(6) (1994).
[6] Perrochon L., W3 "Middleware": Notion and Concepts, Workshop on Web Access to Legacy Data, Boston, MA, December (1995).
[7] Eichmann D., Application Architectures for Web-Based Data Access, Workshop on Web Access to Legacy Data, Boston, MA, December (1995).
[8] Microsoft dbWeb 1.1 Tutorial, Microsoft Corporation (1996).
[9] WebDBC White Paper #1, A Quick Overview of the WebDBC 1.0 Architecture, Nomad Development Corporation (1995).
[10] Hadjiefthymiades S. and Martakos D., A generic framework for the deployment of structured databases on the World Wide Web, in the proceedings of the Fifth International WWW Conference, Computer Networks and ISDN Systems 28(7-11) (1996).
[11] Braek R. and Haugen O., Engineering Real Time Systems, Prentice Hall (1993).
[12] Crocker D.H., Standard for the Format of ARPA Internet Text Messages, STD11, RFC 822, UDEL, August (1982).
[13] Stevens W.R., UNIX Network Programming, Prentice Hall (1990).
[14] Date C.J., An Introduction to Database Systems, Addison-Wesley (1995).
[15] Informix-ESQL/C Programmer's Manual, Informix Software Inc. (1996).
[16] Ibe O.C., Choi H. and Trivedi K.S., Performance Evaluation of Client-Server Systems, IEEE Transactions on Parallel and Distributed Systems 4(11) (1993).
[17] Perrochon L., Translation Servers: Gateways Between Stateless and Stateful Information Systems, Institut fur Informationssysteme, ETH Zurich, Technical Report 1994PA-nsc94 (1994).
[18] Comer D.E. and Stevens D.L., Internetworking with TCP/IP, Vol. III: Client-Server Programming and Applications, Prentice Hall (1994).
