What are the types of databases, EXCEL database belongs to which type?
I. Network Databases
The first DBMSs to appear were network DBMSs. In the network model, the record is the unit of data storage. A record contains a number of data items, which in a network database may be multi-valued or composite. Each record carries an internal identifier that uniquely identifies it, called a database key (DBK), assigned automatically by the DBMS when the record is stored in the database. The DBK can be thought of as the record's logical address: it can stand in for the record or be used to locate it. A network database is a navigational database. When operating on it, the user must state not only what to do but also how to do it; for example, a find statement must name not only the object to be found but also the access path to reach it.
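The navigational style can be illustrated with a small Python sketch. The record structures and names here are hypothetical, not the actual IDS/CODASYL API; the point is that every record gets a DBK on insert and the program itself spells out the path from record to record:

```python
# Minimal sketch of navigational (network-model) access: every record has a
# database key (DBK) acting as its logical address, and the application must
# follow explicit links between records rather than issue a declarative query.

class Record:
    _next_dbk = 1

    def __init__(self, data):
        self.dbk = Record._next_dbk      # DBK assigned on insert, as a DBMS would
        Record._next_dbk += 1
        self.data = data                 # data items (may be composite)
        self.links = {}                  # named links to other records' DBKs

db = {}  # the "database": DBK -> record

def store(data):
    r = Record(data)
    db[r.dbk] = r
    return r.dbk

# Build a small network: one department record owning two employee records.
dept = store({"name": "Sales"})
e1 = store({"name": "Alice"})
e2 = store({"name": "Bob"})
db[dept].links["employees"] = [e1, e2]

# Navigational access: the program states the path (dept -> employees),
# instead of letting the DBMS choose one.
names = [db[k].data["name"] for k in db[dept].links["employees"]]
print(names)  # ['Alice', 'Bob']
```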
The world's first network DBMS, and also the first DBMS of any kind, was IDS (Integrated Data Store), developed successfully in 1964 by Charles Bachman and others at General Electric in the United States. IDS laid the foundation for network databases and was widely distributed and applied at the time. In 1971, the DBTG (Data Base Task Group) of CODASYL (Conference on Data Systems Languages) issued the famous DBTG report, which defined the network data model and its languages; revisions and additions followed in 1978 and 1981. For this reason the network data model is also known as the CODASYL model or the DBTG model. In 1984 the American National Standards Institute (ANSI) proposed a recommended standard, the Network Definition Language (NDL). The 1970s saw a large number of network DBMS products; the better-known ones include IDMS from Cullinet Software, IDS II from Honeywell, DMS 1100 from Univac (later merged into Unisys), and IMAGE from HP. The network model represents both hierarchical and non-hierarchical structures naturally, and before relational databases appeared, network DBMSs were more widely used than hierarchical ones. Network databases occupy an important place in the history of database development.
II. Hierarchical Databases
Hierarchical DBMSs appeared shortly after network databases. Many things in the real world are organized hierarchically, and the hierarchical data model was proposed first of all to model such organization. Hierarchical databases also access data record by record. The most basic relationship in the hierarchical data model is the basic hierarchical relationship, a one-to-many relationship between two record types, also called a parent-child relationship (PCR). Exactly one record type in the database has no parent; it is called the root node. Every other record type has exactly one parent. Because the mapping from a node to its parent is unique in the hierarchical model, each record type (other than the root) need only indicate its parent to represent the overall structure. The hierarchical model is thus tree-shaped.
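The "exactly one parent" rule can be made concrete with a short sketch (illustrative names, not IMS's actual record API). Because each node stores a single parent pointer, a record's full position in the tree is recovered just by walking upward:

```python
# Sketch of the hierarchical model: a tree of records in which every record
# except the root has exactly one parent (a 1:N parent-child relationship).

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # exactly one parent (None only for the root)
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def path_from_root(self):
        # The parent mapping is unique, so following parent pointers upward
        # reconstructs the record's position in the whole hierarchy.
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return list(reversed(path))

root = Node("Company")
dept = Node("Engineering", parent=root)
emp = Node("Alice", parent=dept)

print(emp.path_from_root())  # ['Company', 'Engineering', 'Alice']
```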
The most famous and typical hierarchical database system is IBM's IMS (Information Management System), the earliest large database program product developed by IBM. Since its creation in the late 1960s it has evolved to IMS V6, which supports advanced features such as clustering, N-way data sharing, and message queue sharing. This database product, now over thirty years old, continues to play a role in today's WWW connectivity and business-intelligence applications.
III. Relational Databases
The relational model
Network and hierarchical databases solved the problems of data centralization and sharing well, but they still fell short in data independence and in level of abstraction: to access data in either kind of database, users had to specify the storage structure and point out the access path. Relational databases, which appeared later, solved these problems far better. Relational database theory emerged in the late 1960s and early 1970s. In 1970, IBM researcher Dr. E. F. Codd published "A Relational Model of Data for Large Shared Data Banks," introducing the concept of the relational model; his subsequent papers laid the foundations of the relational database. The relational model has a rigorous mathematical basis and a comparatively high level of abstraction; it is simple, clear, and easy to understand and use. At the time, however, some considered the relational model an idealized data model impractical for implementing a DBMS, worried in particular that relational performance would be unacceptable, and some even saw it as a serious threat to the standardization of network databases then under way. To promote understanding of the issues, the ACM organized a symposium in 1974 at which the pro- and anti-relational camps, led by Codd and Bachman respectively, debated. This famous debate spurred the development of relational databases, which eventually became the dominant modern database products.
The relational data model specifies the characteristics and functional requirements of relational operations but does not prescribe a concrete syntax for the DBMS language. Operations on relational databases are highly non-procedural: the user need not specify access paths, whose choice is left to the DBMS's optimizer. Codd's papers of the early 1970s also covered normalization theory and the twelve criteria for measuring relational systems, grounding relational databases in mathematical theory. For these outstanding contributions Dr. Codd received the 1981 ACM Turing Award.
The relational data model was developed from the concept of a relation in set theory. Both entities and the links between entities are represented by a single structural type, the relation. In an actual relational database, relations are also called tables; a relational database consists of a number of tables.
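The point that entities and their links are both just tables can be shown with a small SQLite example (table and column names are invented for illustration). The many-to-many link between two entity tables is itself an ordinary table, and the join is declarative, with no access path specified:

```python
# Both entities and the links between them are plain tables (relations).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Entity tables
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE course (id INTEGER PRIMARY KEY, title TEXT)")
# The many-to-many link between them is itself just another table
cur.execute("""CREATE TABLE enrollment (
                   student_id INTEGER REFERENCES student(id),
                   course_id  INTEGER REFERENCES course(id))""")

cur.execute("INSERT INTO student VALUES (1, 'Alice')")
cur.execute("INSERT INTO course VALUES (10, 'Databases')")
cur.execute("INSERT INTO enrollment VALUES (1, 10)")

# No access path is specified: the optimizer decides how to evaluate the join.
cur.execute("""SELECT s.name, c.title
               FROM student s
               JOIN enrollment e ON e.student_id = s.id
               JOIN course c ON c.id = e.course_id""")
print(cur.fetchall())  # [('Alice', 'Databases')]
```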
The emergence and development of the SQL language
In 1974, IBM's Ray Boyce and Don Chamberlin rendered the mathematically defined operations of Codd's relational model in a simple keyword syntax, marking the birth of the SQL (Structured Query Language) language. SQL, with its query, manipulation, definition, and control functions, is a comprehensive, general-purpose relational database language, and at the same time a highly non-procedural one: the user states what to do, not how to do it. SQL integrates all the operations of the database life cycle. Since its inception, SQL has been the touchstone of relational databases, and every revision of the SQL standard has guided the direction of relational database products.
While SQL was progressing, the IBM research center launched the System R project in 1973, whose goal was to demonstrate the feasibility of a full-featured relational DBMS. The project ended in 1979 with the completion of the first DBMS to implement SQL. In 1986, ANSI adopted SQL as the American standard for relational database languages and published the standard SQL text that same year. There are currently three versions of the SQL standard. The basic definition is ANSI X3.135-1989, "Database Language - SQL with Integrity Enhancement" [ANS89], generally called SQL-89, which covers schema definition, data manipulation, and transaction processing. SQL-89, together with the subsequent ANSI X3.168-1989, "Database Language - Embedded SQL," constituted the first generation of SQL standards. ANSI X3.135-1992 [ANS92] describes an enhanced version, now known as SQL-92, which adds schema manipulation, dynamic creation and execution of SQL statements, and network-environment support. After SQL-92 was completed, ANSI and ISO began jointly developing the SQL3 standard, whose defining feature is support for abstract data types, providing a standard for the new generation of object-relational databases.
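SQL's four function areas (definition, manipulation, query, and control) can be seen together in one short snippet, here using SQLite as a convenient stand-in for a standard-SQL engine (table and column names invented for illustration):

```python
# SQL's definition, manipulation, control, and query functions in one place.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()

cur.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")  # definition (DDL)
cur.execute("INSERT INTO account VALUES (1, 100), (2, 50)")                    # manipulation (DML)

# Control: a transfer is wrapped in a transaction so it commits or rolls
# back as a unit (transaction processing was already part of SQL-89).
cur.execute("BEGIN")
try:
    cur.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
    cur.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
    cur.execute("COMMIT")
except sqlite3.Error:
    cur.execute("ROLLBACK")

cur.execute("SELECT balance FROM account ORDER BY id")                         # query
print(cur.fetchall())  # [(70,), (80,)]
```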
Part II: Introduction to Mainstream Relational Database Software
Codd's relational theory divides relational systems into four levels: tabular systems, (minimally) relational systems, relationally complete systems, and fully relational systems. No database system yet qualifies as fully relational; a true relational system should be at least relationally complete. Modern mainstream relational database products are relationally complete.
I. IBM DB2 / DB2 Universal Database
A pioneer and leader in relational databases, IBM began offering an integrated database server, the System/38, in 1980, followed by SQL/DS for VSE and VM, whose initial versions were closely tied to the System R research prototype. DB2 for MVS V1 was introduced in 1983; the goal of that release was to deliver the simplicity, data independence, and user productivity promised by the new approach. Subsequent DB2 releases focused on improving performance, reliability, and capacity to meet a wide range of business-critical industry needs. DB2 for MVS provided robust online transaction processing (OLTP) support in 1988, then added remote units of work in 1989 and distributed units of work in 1993, enabling distributed database support. The most recent release, DB2 Universal Database 6.1, is the exemplar of the universal database: the first web-enabled multimedia relational database management system to support a range of platforms, including Linux. Its main new features include:
1) A Java Stored Procedure Builder supporting rapid development of server-side stored procedures.
2) Support for standard LDAP for communication with directory servers.
3) Enhanced conversion and migration tools.
4) An extended DB2 Universal Database Control Center, allowing the same graphical tools to be used for administrative tasks on more platforms.
5) Improved e-commerce performance, with multiple e-commerce integration options.
6) Strong XML support.
II. The history of Informix / Informix IDS.2000
Informix was founded in 1980 to provide professional relational database products for Unix and other open operating systems. The name Informix is a blend of Information and Unix.
Informix's first true relational database product supporting SQL was Informix SE (Standard Engine), characterized by simplicity, light weight, and adaptability. It gained a very large installed base, becoming the main database product in the microcomputer Unix environments of the time, and was also the first commercial database product ported to Linux.
In the early 1990s, online transaction processing became an increasingly important application of relational databases, just as the Client/Server architecture was emerging. To meet the needs of online transaction processing in a Client/Server environment, Informix brought the Client/Server concept into its database products, separating the application's requests to the database from the database's processing of those requests, and introduced Informix-OnLine. A major change in OnLine's data management was that data tables were no longer individual files but lived in database spaces on logical devices; logical devices could be built not only on top of the file system but also on disk partitions and raw devices. This improved data security.
In 1993, to overcome the performance limits of multi-process systems, Informix rewrote the database core around a multi-threading mechanism, and at the beginning of the following year introduced Informix Dynamic Server, built on what it called the Dynamic Scalable Architecture (DSA). Besides the threading mechanism, Informix introduced the concept of virtual processors into the database core, each virtual processor being an Informix database-server process. In Dynamic Server, multiple threads execute in parallel across the virtual processors, and the virtual processors are in turn scheduled onto the machine's actual processors. Moreover, for efficient and flexible tuning, Informix classifies virtual processors by processing task, with each class optimized for a specific function.
By the late 1990s, with the rise of the Internet, a wave of applications involving electronic documents, pictures, video, spatial information, and Internet/Web content flooded the IT industry, while the data types managed by relational databases remained at the level of the 1960s and 1970s: numbers, strings, and dates. Their processing power had become inadequate. In 1992, Prof. Michael Stonebraker of the University of California, Berkeley, the renowned database scholar and founder of Ingres, proposed the object-relational database model, pointing to an effective way out of the problem.
In 1995, Stonebraker and his research and development team joined Informix, which then made a new breakthrough in database technology: in 1996, Informix introduced the Universal Data Option, an object-relational database server. Unlike other vendors' middleware solutions, it extends the relational database with object orientation from inside the server, abstracting and generalizing the mechanisms of the relational database. The Universal Data Option uses all the underlying technology of Dynamic Server, such as the DSA structure and parallel processing, while allowing users to build complex and user-defined data types in the database; operations can be defined on these types to achieve object encapsulation. Operations can be written in the database procedure language or in C, and are registered as part of the server.
In 1999, Informix further refined the Universal Data Option, providing a complete tool environment for user-defined data types and operations. The new database core, named IDS.2000, targets the complex Internet-based database applications of the coming century.
In fact, the popularity of the Internet began with the Web, known for its simplicity and graphical richness. But the HTML files that filled those systems quietly took us back to the days of the file system. The first challenge in using databases to manage Internet information is managing complex information: the emergence of the Internet expanded the concept of "data" in practical applications. For this reason, Informix had been working since 1995 on the design of a new generation of database systems. As a professional database vendor, Informix addressed the diversity of data types in Internet applications by using object technology to extend the relational database system. What sets Informix apart is that the new data types are not hard-wired into the database core; instead, every aspect of the database system is fully abstracted so that users can define and describe the data types they need to manage. The range of manageable types thus becomes unlimited, adapting to the needs of future application development. This is Informix's new database server of this year: Informix Dynamic Server.2000 (IDS.2000 for short).
In IDS.2000, another major contribution of Informix is to abstract the database access methods (the indexing mechanism and query optimization) and open their interfaces. Users can therefore define entirely new indexing mechanisms for complex objects and integrate them throughout the database server. In IDS.2000, user-defined data types, operations, and indexing mechanisms are treated by the system exactly like the built-in ones. IDS.2000 brings all database operations within the scope of standard SQL, remaining formally compatible with traditional relational databases while adapting to the expanded concept of "data," making it a truly general-purpose database. On top of IDS.2000, Informix added a series of core extension modules to form Informix Internet Foundation.2000, a multifunctional database server for the Internet.
The main products of Informix fall into three parts:
Database servers (the database core)
Application development tools
Network database interconnection products
There are two types of database servers, which provide data manipulation and management:
SE: based entirely on the UNIX operating system, mainly for non-multimedia applications with small numbers of users
ONLINE: for online transaction processing with large numbers of users and for multimedia application environments
Application development tools are the environments and tools needed to develop applications; there are two main series:
4GL: INFORMIX's traditional character-interface development tool series, comprising five main products: I-SQL, 4GL RDS, 4GL C COMPILER, 4GL ID, and ESQL/C;
NewEra: INFORMIX's newest offering, an event-driven, object-oriented development tool based on a variety of graphical interfaces.
INFORMIX's network database interconnection products provide an application programming interface based on a variety of industry standards, through which users can connect to other databases that adhere to those standards.
III. The history of Sybase / Sybase ASE
Sybase was founded in 1984; the name "Sybase" combines "system" and "database." Bob Epstein, one of the company's founders, was the principal designer of the university version of Ingres, a relational database contemporary of System R. The company's first relational database product, Sybase SQL Server 1.0, was introduced in May 1987.
Sybase first proposed the idea of the Client/Server database architecture and was the first to implement it, in its own Sybase SQL Server. Before then, computer information was generally stored on a single host; end users managed and accessed the host through character terminals, with the vast majority of processing done by the host and the terminal handling little more than input and simple display. The hardware and software costs of this host/terminal model were quite high, generally beyond the reach of small and medium-sized enterprises. In the late 1970s and early 1980s, two far-reaching developments occurred in the IT industry: the rapid spread of PCs, which are far more capable than terminals, and of local area networks (LANs), which are much faster than host-terminal connections, while both cost far less than host systems. These provided the hardware foundation for implementing Client/Server architectures.
In the Client/Server architecture, the server provides data storage and management; the client runs the application, obtaining the server's services over the network and using the database resources on the server. Clients and servers, connected by the network, form a cooperative system. The Client/Server architecture takes the large database systems that formerly ran on the host and divides their functions sensibly between client and server. In Sybase SQL Server, the database and application are divided into the following logical functions: user interface, presentation logic, transaction logic, and data access. Sybase's design puts transaction logic and data access on the server and the user interface and presentation logic on the client.
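That division of labor can be sketched in a few lines (function and data names are hypothetical, not Sybase's API): business computation and data access run near the data, while the client only formats results for display.

```python
# Sketch of the Client/Server split described above: transaction logic and
# data access live on the server; user interface and presentation logic on
# the client.

# --- server side (hypothetical) ---
_orders = {"A-1": {"item": "widget", "qty": 3, "unit_price": 4}}

def get_order_total(order_id):
    # transaction logic + data access: computed on the server, near the data
    o = _orders[order_id]
    return o["qty"] * o["unit_price"]

# --- client side (hypothetical) ---
def render_order(order_id):
    # presentation logic: only formatting happens on the client
    return f"Order {order_id}: total = {get_order_total(order_id)}"

print(render_order("A-1"))  # Order A-1: total = 12
```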
By configuring hardware and software rationally, the Client/Server architecture greatly advanced the realization of online enterprise information systems at the time. Compared with the host/terminal model, it better supports data services and application sharing, and the resulting systems are easier to extend and more flexible, simplifying the development of enterprise information systems. When the system's scale or requirements change, it can be extended and adjusted on its existing basis rather than redesigned, protecting the enterprise's existing investment in hardware and software.
The Client/Server architecture soon became the main mode of enterprise information-system construction and had a profound impact on the development of databases and the IT industry at large.
In 1989, Sybase released Open Client/Open Server, which provides a consistent, open interface for different data sources and for hundreds of tools and applications, offering a highly effective means of building interoperable systems in heterogeneous environments.
In November 1992, Sybase released SQL Server 10.0 and a series of new products (preceded by SQL Server versions 2.0, 4.2, 4.8, 4.9, among others), advancing SQL Server from a Client/Server system to one supporting enterprise-class computing environments. Sybase calls this product line System 10; it was designed to support enterprise-wide databases (running Sybase and other vendors' database systems).
Sybase SQL Server 10.0 is the core of System 10. Compared with version 4.9 it added many new features: the revised Transact-SQL fully complies with the ANSI-89 SQL standard as well as the entry level of ANSI-92 SQL; cursor control was enhanced, allowing applications to fetch data row by row and to scroll through results in both directions; and a threshold manager was introduced. In 1995, Sybase released Sybase SQL Server 11.0, which, besides continuing its strong support for online transactions, added many features supporting online analytical processing and decision-support systems.
To adapt to changing application needs now and in the future, Sybase released the Adaptive Component Architecture (ACA) in April 1997. ACA is a three-tier architecture comprising client, middle tier, and server, each tier providing a running environment for components. A system built on ACA can be configured tier by tier according to application requirements and adapted to future needs. In line with ACA, Sybase renamed SQL Server to Adaptive Server Enterprise, starting with version 11.5. ACA introduces two component concepts: logic components and data components. Logic components implement application logic; they can be developed in languages such as Java, C/C++, and PowerBuilder, and can follow popular component standards such as CORBA, ActiveX, and JavaBeans. Data components implement the storage of and access to different types of data and are provided by Adaptive Server Enterprise 11.5 (ASE 11.5 for short). Besides traditional relational data storage, data components can support a variety of complex data types: users can install the appropriate data-storage components for the types they need, such as geospatial, time-series, multimedia/image, and text data. They represent Sybase's technical strategy for handling complex, multidimensional, and object data types.
ASE 11.5 significantly enhanced support for data warehousing and OLAP, introducing a logical process manager that lets users set the runtime priority of objects.
Sybase released ASE 11.9.2 in 1998. Its signature feature was two new locking mechanisms for balancing concurrency and performance: data-page locks and data-row locks, providing finer granularity. Query optimization was improved as well.
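Why finer granularity helps concurrency can be shown with a toy simulation (this is a sketch of the general idea, not ASE's lock manager): two transactions updating different rows conflict under page locking whenever the rows happen to share a page, but never under row locking.

```python
# Toy model of lock granularity: coarser locks cause more false conflicts.

PAGE_SIZE = 2  # rows per page in this illustrative example

def conflicts(row_a, row_b, granularity):
    """Would two transactions locking these rows block each other?"""
    if granularity == "row":
        return row_a == row_b                          # only the same row conflicts
    if granularity == "page":
        return row_a // PAGE_SIZE == row_b // PAGE_SIZE  # same page conflicts
    raise ValueError(granularity)

# Rows 0 and 1 live on the same page.
print(conflicts(0, 1, "page"))  # True  -> second transaction must wait
print(conflicts(0, 1, "row"))   # False -> both proceed concurrently
```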
Entering 1999, with the widespread use of the Internet, Sybase put forward the "Open Door" program to help enterprises build enterprise-portal applications. An important part of the program was the introduction of the latest, portal-oriented ASE 12.0, which, to meet the requirements of the enterprise portal, made substantial improvements in productivity, usability, and integration.
ASE 12 provides good support for Java and XML. It guarantees the integrity of distributed transactions by fully supporting the industry-standard X/Open XA interface for distributed transaction processing as well as Microsoft's DTC standard, and its built-in, highly efficient Transaction Manager supports high throughput for distributed transactions.
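The core idea behind keeping a distributed transaction atomic, which protocols like X/Open XA build on, is two-phase commit: the transaction manager first asks every resource manager to vote, then commits everywhere only if all voted yes. A minimal sketch (class and method names are illustrative, not Sybase's or XA's actual API):

```python
# Sketch of two-phase commit, the idea underlying distributed transaction
# managers: either every participant commits or every participant rolls back.

class Participant:
    """An illustrative resource manager taking part in the transaction."""
    def __init__(self, ok):
        self.ok = ok                  # whether this participant can commit
        self.state = "active"

    def prepare(self):                # phase 1: vote
        return self.ok

    def commit(self):                 # phase 2a
        self.state = "committed"

    def rollback(self):               # phase 2b
        self.state = "rolled back"

def two_phase_commit(participants):
    # Phase 1: ask every resource manager to prepare (vote).
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()                # all voted yes -> commit everywhere
        return "committed"
    for p in participants:
        p.rollback()                  # any "no" vote -> roll back everywhere
    return "rolled back"

print(two_phase_commit([Participant(True), Participant(True)]))   # committed
print(two_phase_commit([Participant(True), Participant(False)]))  # rolled back
```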
ASE 12 uses cluster technology to reduce unplanned downtime: it supports not only failover between two servers but also automatic client failover.
ASE 12 supports the ACE and Kerberos security modes, through which users gain more secure, encrypted network communication. ASE 12 also provides online index rebuilding: data in a table remains accessible while its index is rebuilt.
In query optimization, ASE 12 introduces a new algorithm, the merge join, which dramatically improves the speed of multi-table join queries; dynamic SQL statements can be executed through the execute immediate statement; and user-defined, persistent, complete query plans allow more effective performance tuning. In addition, ASE 12, together with other Sybase products such as Sybase Enterprise Application Server and Sybase Enterprise Event Broker, provides support for a complete set of standard Internet interfaces.
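The merge-join idea is simple to sketch: once both inputs are sorted on the join key, a single coordinated scan joins them without the nested loops of a naive join. This is a generic illustration of the algorithm, not ASE's implementation:

```python
# Sketch of a merge join: sort both inputs on the join key, then advance two
# cursors in step, emitting matches, instead of comparing every pair of rows.

def merge_join(left, right, key=lambda row: row[0]):
    left = sorted(left, key=key)
    right = sorted(right, key=key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        kl, kr = key(left[i]), key(right[j])
        if kl < kr:
            i += 1
        elif kl > kr:
            j += 1
        else:
            # Emit all right-side rows sharing this key, then advance the left.
            j0 = j
            while j < len(right) and key(right[j]) == kl:
                out.append(left[i] + right[j])
                j += 1
            i += 1
            if i < len(left) and key(left[i]) == kl:
                j = j0  # same key again on the left: rescan the right group
    return out

emps = [(1, "Alice"), (2, "Bob")]
depts = [(1, "Sales"), (3, "HR")]
print(merge_join(emps, depts))  # [(1, 'Alice', 1, 'Sales')]
```

Because each sorted input is scanned essentially once, the join itself is linear in the input sizes, which is why merge joins speed up large multi-table queries so dramatically when sorted access paths are available.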