Data Between Database Servers

Okay, on to CLSA. Here's a base case on why CLSA / e Equity Access nixed Sybase ASE 11.9 for Microsoft SQL Server 2000 on their research database...

Problem to solve: Provide a mechanism that enables secure electronic contribution and storage of equity research data, as well as a mechanism for secure electronic distribution of the information stored in a repository or data warehouse. Factors affecting the implementation included:

- Providing access to the mechanisms via intranet and internet connectivity
- Ensuring the contribution / storage / notification process completes within 5 seconds once the information has been submitted for confirmation
- Ensuring performance (i.e., user response time) for the distribution of information within 5 seconds of a request
- The contribution function must support no fewer than 1,000 simultaneous users located in 15 countries.

- The distribution mechanism must support no fewer than 2,000,000 simultaneous users located in 57 countries.
- Rapid startup time (3 months for a prototype, 9 months for production)

Those were my requirements...

The issues: Our parent firm was already using a home-grown research database for contribution and distribution of 400 GB of research information. It was suggested that we copy their setup and code to meet our rapid-startup requirement. Our parent firm had implemented a Sybase ASE 11.9 database running on high-end HP-UX servers (an HP Superdome with 256 GB of memory and a 2,000 GB (2 TB) mirrored storage array) engineered for large-scale databases. This high-end architecture is ideal for the specific demands of online transaction processing workloads and high availability (it never goes down).

Unfortunately, the whole setup is very expensive and very centralized (Hong Kong). To meet their performance criteria, they resorted to distributing all or parts of the database across multiple servers in different time zones. This was a very expensive solution to maintain because of the database licensing, hardware, communication costs associated with distributing the data, and the personnel needed to support the distributed databases (3 databases x 55 tables in each database x 4 countries). We were starting with 6 countries and over 100,000 simultaneous users. The parent solution would have required an initial capital expenditure of $4,200,000 and a run rate of $46,000 per month (to start). Our starting budget was somewhere around $125,000 (not including personnel).

This is how I make a living... Using a copy of Microsoft SQL Server 2000, I prototyped the database and supporting structures (indexes, stored procedures, tables, etc.) in two weeks, using our parent firm and another successful internet-based research supplier as templates for our base service. The license for the software was $900, and the server was already being used for other non-production work; the SQL Server license itself was free because we already had a development license. I set SQL (ANSI SQL-92), Java (J2EE), JavaScript, XML, and HTML as the baseline standards for development. Notepad was the development tool.

I picked up a free evaluation copy of ColdFusion 5.0 for components and downloaded JDK 1.3 from Sun.com for the Java development environment. The majority of the database activity revolves around a master transaction table for collecting and distributing data. Separate reference tables (in 1-to-many relationships with the transaction table) are used for user access, markets, countries, clients, contributors, names, addresses, etc., in order to minimize table maintenance and complicated screen inputs. Command-line batch jobs using SQL BULK INSERT were created to transfer data between database servers.
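The schema and transfer pattern above can be sketched in miniature. This is not the original T-SQL (which I don't have); it's a hedged illustration using Python's sqlite3 module, with made-up table and column names (`country`, `research_txn`, `headline`), showing a reference table in a 1-to-many relationship with a master transaction table, and a batch copy of rows from a source database to a destination database standing in for the command-line BULK INSERT jobs.

```python
import sqlite3

def build_schema(conn):
    # Reference table: one country row relates to many transaction rows
    conn.execute("""
        CREATE TABLE country (
            country_id INTEGER PRIMARY KEY,
            name       TEXT NOT NULL
        )""")
    # Master transaction table at the center of contribution/distribution
    conn.execute("""
        CREATE TABLE research_txn (
            txn_id     INTEGER PRIMARY KEY,
            country_id INTEGER NOT NULL REFERENCES country(country_id),
            headline   TEXT NOT NULL
        )""")

# Source server (stand-in): load reference and transaction data
src = sqlite3.connect(":memory:")
build_schema(src)
src.execute("INSERT INTO country VALUES (1, 'Hong Kong')")
src.executemany("INSERT INTO research_txn VALUES (?, ?, ?)",
                [(1, 1, 'Equity note A'), (2, 1, 'Equity note B')])

# Destination server (stand-in): batch-load the exported rows in one pass,
# the same idea as a scheduled command-line job running SQL BULK INSERT
dst = sqlite3.connect(":memory:")
build_schema(dst)
rows = src.execute("SELECT txn_id, country_id, headline "
                   "FROM research_txn ORDER BY txn_id").fetchall()
dst.executemany("INSERT INTO research_txn VALUES (?, ?, ?)", rows)

count = dst.execute("SELECT COUNT(*) FROM research_txn").fetchone()[0]
print(count)  # prints 2
```

In the real setup, the export/import step would be a bcp or BULK INSERT batch file rather than an in-process copy, but the shape is the same: pull the transaction rows from one server, bulk-load them into the other.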

In six weeks we were ready to test the system's performance on a Compaq ProLiant 8500-class machine with 8 CPUs, 16 GB of memory, and 200 GB of external storage. We had.