Friday, 21 February 2014

Download material: Database Management System (SQL Server + MySQL)



       Review of Database Systems Concepts

Chapters 1,2,3,4,5,8,9 in [R]
       [R] = Ramakrishnan & Gehrke: Database Management Systems, Third Edition
       What Is a DBMS?
        A very large, integrated collection of data.
        Models real-world enterprise.
      Entities (e.g., students, courses)
      Relationships (e.g., Madonna is taking CS564)
        A Database Management System (DBMS) is a software package designed to store and manage databases.
       Files vs. DBMS
       Application must stage large datasets between main memory and secondary storage (e.g., buffering, page-oriented access, 32-bit addressing, etc.)
       Special code for different queries
       Must protect data from inconsistency due to multiple concurrent users
       No crash recovery
       No security and access control
       Why Use a DBMS?
       Data independence and efficient access.
       Reduced application development time.
       Data integrity and security.
       Uniform data administration.
       Concurrent access, recovery from crashes.
       Why Study Databases??
         Shift from computation to information
      at the low end: scramble to webspace (a mess!)
      at the high end: scientific applications
         Datasets increasing in diversity and volume. 
      Digital libraries, interactive video, Human Genome project, EOS project 
      ...  need for DBMS exploding
         DBMS encompasses most of CS
      OS, languages, theory, AI,  multimedia, logic
       Data Models
         A data model  is a collection of concepts for describing data.
         A schema is a description of a particular collection of data, using a given data model.
         The relational model of data is the most widely used model today.
      Main concept:  relation, basically a table with rows and columns.
      Every relation has a schema, which describes the columns, or fields.
       Levels of Abstraction
         Many views, single conceptual (logical) schema and physical schema.
      Views describe how users see the data.                                       
      Conceptual schema defines logical structure
      Physical schema describes the files and indexes used.
       Example: University Database
         Logical schema:                 
       Students(sid: string, name: string, login: string,
                          age: integer, gpa:real)
       Courses(cid: string, cname:string, credits:integer)
       Enrolled(sid:string, cid:string, grade:string)
         Physical schema:
      Relations stored as unordered files.
      Index on first column of Students.
         External Schema (View):
      Course_info(cid:string,enrollment:integer)
       Data Independence *
       Applications insulated from how data is structured and stored.
       Logical data independence:  Protection from changes in logical structure of data.
       Physical data independence:   Protection from changes in physical structure of data.


       Why Study the Relational Model?
       Most widely used model.
     Vendors: IBM, Informix, Microsoft, Oracle, Sybase, etc.
       Legacy systems in older models
      E.g., IBM's IMS
       Recent competitor: object-oriented model
     ObjectStore, Versant, Ontos
     A synthesis emerging: object-relational model
     Informix Universal Server, UniSQL, O2, Oracle, DB2
       Relational Database: Definitions
       Relational database: a set of relations
       Relation: made up of 2 parts:
     Instance : a table, with rows and columns.
#Rows = cardinality, #fields = degree / arity.
     Schema : specifies name of relation, plus name and type of each column.
      E.g., Students(sid: string, name: string, login: string, age: integer, gpa: real).
       Can think of a relation as a set of rows or tuples (i.e., all rows are distinct).
       Primary Key Constraints
        A set of fields is a key for a relation if :
1. No two distinct tuples can have same values in all key fields, and
2. This is not true for any subset of the key.
     Part 2 false? A superkey.
      If there's more than one key for a relation, one of the keys is chosen (by the DBA) to be the primary key.
        E.g., sid is a key for Students.  (What about name?)  The set {sid, gpa} is a superkey.
        Every relation must have a key
      Example Instance of Students Relation
       Relational Query Languages
        A major strength of the relational model: supports simple, powerful querying of data.
        Queries can be written intuitively, and the DBMS is responsible for efficient evaluation.
     The key: precise semantics for relational queries.
     Allows the optimizer to extensively re-order operations, and still ensure that the answer does not change.
       The SQL Query Language
         Developed by IBM (System R) in the 1970s
        Need for a standard since it is used by many vendors
        Standards:
     SQL-86
     SQL-89 (minor revision)
     SQL-92 (major revision)
     SQL-99 (major extensions, current standard)

       The SQL Query Language
        To find all 18-year-old students, we can write:
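The query on this slide was an image in the original deck and did not survive extraction. A minimal sketch of the standard selection, run through SQLite from Python (the Students layout follows the slides; the sample rows are invented for illustration):

```python
import sqlite3

# Students schema from the slides; sample rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Students
    (sid CHAR(20), name CHAR(20), login CHAR(10), age INTEGER, gpa REAL)""")
conn.executemany("INSERT INTO Students VALUES (?, ?, ?, ?, ?)",
                 [("53666", "Jones", "jones@cs", 18, 3.4),
                  ("53688", "Smith", "smith@ee", 18, 3.2),
                  ("53650", "Smith", "smith@math", 19, 3.8)])

# The selection itself: SELECT * FROM Students S WHERE S.age = 18
eighteen = conn.execute(
    "SELECT S.name FROM Students S WHERE S.age = 18").fetchall()
```

The range variable `S` is optional here, but the book's slides use it consistently.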
        Querying Multiple Relations
        What does the following query compute?
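The query on this slide is also missing. In the textbook this slide joins Students with Enrolled; a plausible reconstruction (sample rows invented) that returns the name and course of every student who got a 'B':

```python
import sqlite3

# Students and Enrolled schemas from the slides; rows are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (sid CHAR(20), name CHAR(20), login CHAR(10),
                       age INTEGER, gpa REAL);
CREATE TABLE Enrolled (sid CHAR(20), cid CHAR(20), grade CHAR(2));
INSERT INTO Students VALUES ('53831', 'Madayan', 'madayan@music', 11, 1.8);
INSERT INTO Students VALUES ('53666', 'Jones', 'jones@cs', 18, 3.4);
INSERT INTO Enrolled VALUES ('53831', 'Carnatic101', 'C');
INSERT INTO Enrolled VALUES ('53666', 'History105', 'B');
""")

# Join Students and Enrolled on sid; keep only grade-'B' enrollments.
rows = conn.execute("""
    SELECT S.name, E.cid
    FROM Students S, Enrolled E
    WHERE S.sid = E.sid AND E.grade = 'B'
""").fetchall()
```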
       Creating Relations in SQL
         Creates the Students relation. Observe that the type (domain) of each field is specified, and enforced by the DBMS whenever tuples are added or modified.
         As another example, the Enrolled table holds information about courses that students take.
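The DDL on this slide did not survive extraction; a sketch of the usual CREATE TABLE statements for these two relations (run here in SQLite, which checks declared types only loosely; a strict SQL DBMS enforces the declared domain on every insert or update):

```python
import sqlite3

# Reconstructed CREATE TABLE statements matching the schemas on the slides.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (sid CHAR(20), name CHAR(20), login CHAR(10),
                       age INTEGER, gpa REAL);
CREATE TABLE Enrolled (sid CHAR(20), cid CHAR(20), grade CHAR(2));
""")
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
```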
       Adding and Deleting Tuples
       Can insert a single tuple using:
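The INSERT and DELETE statements on this slide were images; a sketch of the standard forms (the specific row values are illustrative):

```python
import sqlite3

# Students schema from the slides.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Students
    (sid CHAR(20), name CHAR(20), login CHAR(10), age INTEGER, gpa REAL)""")

# Insert a single tuple:
conn.execute("""INSERT INTO Students (sid, name, login, age, gpa)
                VALUES ('53688', 'Smith', 'smith@ee', 18, 3.2)""")
count_after_insert = conn.execute(
    "SELECT COUNT(*) FROM Students").fetchone()[0]

# Delete all tuples satisfying a condition:
conn.execute("DELETE FROM Students WHERE name = 'Smith'")
count_after_delete = conn.execute(
    "SELECT COUNT(*) FROM Students").fetchone()[0]
```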
       Integrity Constraints (ICs)
        IC: condition that must be true for any instance of the database; e.g., domain constraints.
      ICs are specified when schema is defined.
      ICs are checked when relations are modified.
        A legal instance of a relation is one that satisfies all specified ICs. 
      DBMS should not allow illegal instances.
        If the DBMS checks ICs, stored data is more faithful to real-world meaning.
      Avoids data entry errors, too!
       Primary and Candidate Keys in SQL
        Possibly many candidate keys  (specified using UNIQUE), one of which is chosen as the primary key.
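A sketch of how these constraints are declared (SQLite syntax; the Enrolled schema is from the slides). Making (sid, cid) the primary key says a student enrolls in a course at most once, and the DBMS rejects a second enrollment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "For a given student and course, there is a single grade."
conn.execute("""CREATE TABLE Enrolled
    (sid CHAR(20), cid CHAR(20), grade CHAR(2),
     PRIMARY KEY (sid, cid))""")

conn.execute("INSERT INTO Enrolled VALUES ('53666', 'CS564', 'A')")
try:
    # Same (sid, cid) again: violates the primary key constraint.
    conn.execute("INSERT INTO Enrolled VALUES ('53666', 'CS564', 'B')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Additional candidate keys would be declared with UNIQUE (...) clauses alongside the PRIMARY KEY.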
       Foreign Keys, Referential Integrity
          Foreign key : Set of fields in one relation that is used to 'refer' to a tuple in another relation.  (Must correspond to primary key of the second relation.)  Like a 'logical pointer'.
         E.g. sid is a foreign key referring to Students:
      Enrolled(sid: string, cid: string, grade: string)
      If all foreign key constraints are enforced,  referential integrity is achieved, i.e., no dangling references.
      Can you name a data model w/o referential integrity?
     Links in HTML! (XML)
       Foreign Keys in SQL
       Only students listed in the Students relation should be allowed to enroll for courses.
       Enforcing Referential Integrity
         Consider Students and Enrolled;  sid in Enrolled is a foreign key that references Students.
         What should be done if an Enrolled tuple with a non-existent student id is inserted?  (Reject it!)
         What should be done if a Students tuple is deleted?
      Also delete all Enrolled tuples that refer to it.
      Disallow deletion of a Students tuple that is referred to.
      Set sid in Enrolled tuples that refer to it to a default sid.
       (In SQL, also: Set sid in Enrolled tuples that refer to it to a special value null, denoting 'unknown' or 'inapplicable'.)
         Similar if primary key of Students tuple is updated.
       Referential Integrity in SQL
         SQL/92 and SQL:1999 support all 4 options on deletes and updates.
      Default is NO ACTION   (delete/update is rejected)
      CASCADE  (also delete all tuples that refer to deleted tuple)
      SET NULL / SET DEFAULT  (sets foreign key value of referencing tuple)
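A runnable sketch of the CASCADE option (SQLite syntax; NO ACTION is what you get when no ON DELETE clause is specified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE Students (sid CHAR(20) PRIMARY KEY, name CHAR(20))")
# Deleting a student also deletes all of that student's enrollments.
conn.execute("""CREATE TABLE Enrolled
    (sid CHAR(20) REFERENCES Students (sid) ON DELETE CASCADE,
     cid CHAR(20), grade CHAR(2))""")

conn.execute("INSERT INTO Students VALUES ('53666', 'Jones')")
conn.execute("INSERT INTO Enrolled VALUES ('53666', 'CS564', 'A')")
conn.execute("INSERT INTO Enrolled VALUES ('53666', 'CS368', 'B')")

conn.execute("DELETE FROM Students WHERE sid = '53666'")
remaining = conn.execute("SELECT COUNT(*) FROM Enrolled").fetchone()[0]
```

Replacing ON DELETE CASCADE with ON DELETE SET NULL or ON DELETE SET DEFAULT gives the other two behaviors the slide lists.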
       Where do ICs Come From?
         ICs are based upon the semantics of the real-world enterprise that is being described in the database relations.
         We can check a database instance to see if an IC is violated, but we can NEVER infer that an IC is true by looking at an instance.
      An IC is a statement about all possible instances!
       From the example, we know name is not a key, but the assertion that sid is a key is given to us.
         Key and foreign key ICs are the most common; more general ICs supported too.
       The Entity-Relationship Model
       Chapter 2
       Overview of Database Design
         Conceptual design:  (ER Model is used at this stage.)
       What are the entities and relationships in the enterprise?
      What information about these entities and relationships should we store in the database?
      What are the integrity constraints or business rules that hold?
       A database 'schema' in the ER Model can be represented pictorially (ER diagrams).
      Can map an ER diagram into a relational schema.
       ER Model Basics
         Entity:  Real-world object distinguishable from other objects. An entity is described (in DB) using a set of attributes.
         Entity Set:  A collection of similar entities.  E.g., all employees. 
      All entities in an entity set have the same set of attributes.  (Until we consider ISA hierarchies, anyway!)
      Each entity set has a key.
      Each attribute has a domain.
       ER Model Basics (Contd.)
         Relationship:  Association among two or more entities.  E.g., Attishoo works in Pharmacy department.
         Relationship Set:  Collection of similar relationships.
       An n-ary relationship set R relates n entity sets E1 ... En; each relationship in R involves entities e1 ∈ E1, ..., en ∈ En.
     Same entity set could participate in different relationship sets, or in different roles in same set.
       Review: Key Constraints
          Each dept has at most one manager, according to the key constraint on Manages.
       Logical DB Design: ER to Relational
       Entity sets to tables:

       Relationship Sets to Tables
         In translating a relationship set to a relation, attributes of the relation must include:
      Keys for each participating entity set  (as foreign keys).
      This set of attributes forms a superkey for the relation.
      All descriptive attributes.
       Logical DB Design: ER to Relational (Cont.)
       Not so simple!
        Some attributes are mapped to a separate relation
         Some relationships (1:1, 1:many) are mapped to an extra attribute in the '1' relation
        See any standard DB book for details

       Summary of Conceptual Design
         Conceptual design follows requirements analysis,
      Yields a high-level description of data to be stored
         ER model popular for conceptual design
      Constructs are expressive, close to the way people think about their applications.
         Basic constructs: entities, relationships, and attributes (of entities and relationships).
         Some additional constructs: weak entities, ISA hierarchies, and aggregation.
         Note: There are many variations on ER model.
       Formal Relational Query Languages
        Two mathematical Query Languages form the basis for real languages (e.g. SQL), and for implementation:
   Relational Algebra:  More operational, very useful for representing execution plans.
   Relational Calculus:   Lets users describe what they want, rather than how to compute it.  (Non-operational, declarative.)

      Example Instances
        Sailors and Reserves relations for our examples.
         We'll use positional or named field notation, and assume that names of fields in query results are 'inherited' from names of fields in query input relations.
        Relational Algebra
         Basic operations:
       Selection (σ): selects a subset of rows from a relation.
       Projection (π): deletes unwanted columns from a relation.
       Cross-product (×): allows us to combine two relations.
       Set-difference (−): tuples in reln. 1, but not in reln. 2.
       Union (∪): tuples in reln. 1 and in reln. 2.
         Additional operations:
      Intersection, join, division, renaming:  Not essential, but (very!) useful.
         Since each operation returns a relation, operations can be composed! (Algebra is closed.)
       Projection
         Deletes attributes that are not in projection list.
         Schema of result contains exactly the fields in the projection list, with the same names that they had in the (only) input relation.
         Projection operator has to eliminate duplicates!  (Why??)
       Note: real systems typically don't do duplicate elimination unless the user explicitly asks for it.  (Why not?)
       Selection
         Selects rows that satisfy selection condition.
         No duplicates in result!  (Why?)
         Schema of result identical to schema of (only) input relation.
          Result relation can be the input for another relational algebra operation!  (Operator composition.)
       Union, Intersection, Set-Difference
         All of these operations take two input relations, which must be union-compatible:
      Same number of fields.
      `Corresponding fields have the same type.
         What is the schema of result?
       Cross-Product
        Each row of S1 is paired with each row of R1.
        Result schema has one field per field of S1 and R1, with field names `inherited if possible.
     Conflict:  Both S1 and R1 have a field called sid.
       Joins
         Condition Join:





         Result schema same as that of cross-product.
         Fewer tuples than cross-product, might be able to compute more efficiently
         Sometimes called a theta-join. 
       Joins
         Equi-Join:  A special case of condition join where the condition c contains only equalities.





         Result schema similar to cross-product, but only one copy of fields for which equality is specified.
         Natural Join:  Equijoin on all common fields.
       Division
          Not supported as a primitive operator, but useful for expressing queries like: Find sailors who have reserved all boats.
          Let A have 2 fields, x and y; B have only field y:
       A/B = { ⟨x⟩ : ∀ ⟨y⟩ ∈ B, ⟨x, y⟩ ∈ A }
       i.e., A/B contains all x tuples (sailors) such that for every y tuple (boat) in B, there is an xy tuple in A.
       Or: if the set of y values (boats) associated with an x value (sailor) in A contains all y values in B, the x value is in A/B.
          In general, x and y can be any lists of fields; y is the list of fields in B, and x ∪ y is the list of fields of A.
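Division is easy to state over sets; a pure-Python sketch (the sailor/boat ids are made up) makes the definition concrete:

```python
# A/B over sets: A is a set of (x, y) pairs, B is a set of y values.
# Result: every x that is paired in A with ALL y values in B.
def divide(A, B):
    xs = {x for (x, _) in A}
    return {x for x in xs if all((x, y) in A for y in B)}

# Sailor s1 reserved both boats; s2 reserved only b1.
reserves = {("s1", "b1"), ("s1", "b2"), ("s2", "b1")}
boats = {"b1", "b2"}
who_reserved_all = divide(reserves, boats)
```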
       Examples of Division A/B
       Find names of sailors who've reserved boat #103
       Solution 1:  
       Find names of sailors who've reserved a red boat
       Information about boat color only available in Boats; so need an extra join:
       Find sailors who've reserved a red or a green boat
        Can identify all red or green boats, then find sailors who've reserved one of these boats:
       Find the names of sailors who've reserved all boats
        Uses division; schemas of the input relations to / must be carefully chosen:
       Other Relational Languages
       Relational calculus: tuple or domain (QBE)
       SQL: you already know!
       Summary
        The relational model has rigorously defined query languages that are simple and powerful.
        Relational algebra is more operational; useful as internal representation for query evaluation plans.
        Several ways of expressing a given query; a query optimizer should choose the most efficient version.
       Data Storage Review

       Chapter 8

       Disks and Files
        DBMS stores information on (hard) disks.
        This has major implications for DBMS design!
     READ: transfer data from disk to main memory (RAM).
     WRITE: transfer data from RAM to disk.
     Both are high-cost operations, relative to in-memory operations, so must be planned carefully!
       Why Not Store Everything in Main Memory?
        Costs too much.  $1000 will buy you either 128MB of RAM or 7.5GB of disk today.
        Main memory is volatile.  We want data to be saved between runs.  (Obviously!)
        Typical storage hierarchy:
     Main memory (RAM) for currently used data.
     Disk for the main database (secondary storage).
     Tapes for archiving older versions of the data (tertiary storage).
       Disks
        Secondary storage device of choice.
        Main advantage over tapes:  random access vs. sequential.
        Data is stored and retrieved in units called disk blocks or pages.
        Unlike RAM, time to retrieve a disk page varies depending upon location on disk. 
     Therefore, relative placement of pages on disk has major impact on DBMS performance!
       Components of a Disk
       Accessing a Disk Page
         Time to access (read/write) a disk block:
       seek time (moving arms to position disk head on track)
       rotational delay (waiting for block to rotate under head)
       transfer time (actually moving data to/from disk surface)
         Seek time and rotational delay dominate.
       Seek time varies from about 1 to 20msec
       Rotational delay varies from 0 to 10msec
       Transfer rate is about 1msec per 4KB page
         Key to lower I/O cost: reduce seek/rotation delays!  Hardware vs. software solutions?
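A back-of-envelope calculation makes the slide's point concrete. The numbers below are rough midpoints of the ranges quoted above (about 10 ms seek, 5 ms rotational delay, 1 ms to transfer a 4KB page), not measurements:

```python
# Cost of one random page access vs. a sequential run of pages.
def page_access_ms(seek_ms=10.0, rotate_ms=5.0, transfer_ms=1.0):
    return seek_ms + rotate_ms + transfer_ms

def sequential_read_ms(n_pages, seek_ms=10.0, rotate_ms=5.0, transfer_ms=1.0):
    # One seek + one rotational delay, then pages stream off the disk.
    return seek_ms + rotate_ms + n_pages * transfer_ms

random_cost = 100 * page_access_ms()   # 100 randomly placed pages
seq_cost = sequential_read_ms(100)     # 100 sequentially placed pages
```

With these assumptions, 100 random reads cost 1600 ms while 100 sequential reads cost 115 ms, which is why reducing seek/rotation delays dominates I/O tuning.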
       Arranging Pages on Disk
         'Next' block concept:
     blocks on same track, followed by
     blocks on same cylinder, followed by
     blocks on adjacent cylinder
         Blocks in a file should be arranged sequentially on disk (by 'next'), to minimize seek and rotational delay.
        For a sequential scan, pre-fetching several pages at a time is a big win!
       RAID
        Disk Array: Arrangement of several disks that gives abstraction of a single, large disk.
        Goals: Increase performance and reliability.
        Two main techniques:
     Data striping: Data is partitioned; size of a partition is called the striping unit. Partitions are distributed over several disks.
     Redundancy: More disks => more failures. Redundant information allows reconstruction of  data if a disk fails.
       RAID Levels
        Level 0: No redundancy
        Level 1: Mirrored (two identical copies)
     Each disk has a mirror image (check disk)
     Parallel reads, a write involves two disks.
     Maximum transfer rate = transfer rate of one disk
        Level 0+1: Striping and Mirroring
     Parallel reads, a write involves two disks.
     Maximum transfer rate = aggregate bandwidth


       RAID Levels (Contd.)
        Level 3: Bit-Interleaved Parity
     Striping Unit: One bit. One check disk.
     Each read and write request involves all disks; disk array can process one request at a time.
        Level 4: Block-Interleaved Parity
     Striping Unit: One disk block. One check disk.
     Parallel reads possible for small requests, large requests can utilize full bandwidth
     Writes involve modified block and check disk
        Level 5: Block-Interleaved Distributed Parity
     Similar to RAID Level 4, but parity blocks are distributed over all disks

       Disk Space Management
        Lowest layer of DBMS software manages space on disk.
        Higher levels call upon this layer to:
     allocate/de-allocate a page
     read/write a page
         Request for a sequence of pages must be satisfied by allocating the pages sequentially on disk!  Higher levels don't need to know how this is done, or how free space is managed.
       Disk Space Management
Popular methods:
        Linked list of free blocks (first fit, best fit, etc.)

        Inodes in Unix: a tree of free block addresses

        Bit map

        Note: sometimes access to logical block i involves several physical blocks (e.g., Unix)!
       Buffer Management in a DBMS
         Data must be in RAM for DBMS to operate on it!
         Table of <frame#, pageid> pairs is maintained.
       When a Page is Requested ...
        If requested page is not in pool:
     Choose a frame for replacement
     If  frame is dirty, write it to disk
     Read requested page into chosen frame
        Pin the page and return its address. 
       More on Buffer Management
        Requestor of page must unpin it, and indicate whether page has been modified:
     dirty bit is used for this.
        Page in pool may be requested many times,
     a pin count is used.  A page is a candidate for replacement iff pin count = 0.
        CC & recovery may entail additional I/O when a frame is chosen for replacement. (Write-Ahead Log protocol; more later.)
       Buffer Replacement Policy
        Frame is chosen for replacement by a replacement policy:
     Least-recently-used (LRU), Clock, MRU etc.
        Policy can have big impact on # of I/Os; depends on the access pattern.
        Sequential flooding:  Nasty situation caused by LRU + repeated sequential scans.
     # buffer frames < # pages in file means each page request causes an I/O.  MRU much better in this situation (but not in all situations, of course).
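Sequential flooding is easy to demonstrate. The toy simulator below (my own sketch, not DBMS code) scans 5 pages repeatedly through a 4-frame pool and counts misses under LRU vs. MRU:

```python
# Repeated sequential scans of pages 0..n_pages-1 through a buffer pool
# of n_frames frames; returns the number of page misses (I/Os).
def simulate(policy, n_pages=5, n_frames=4, n_scans=3):
    frames, misses = [], 0
    for _ in range(n_scans):
        for page in range(n_pages):
            if page in frames:
                frames.remove(page)  # hit: re-append below as most recent
            else:
                misses += 1
                if len(frames) == n_frames:
                    # LRU evicts the least recently used (front of list);
                    # MRU evicts the most recently used (back of list).
                    frames.pop(0 if policy == "LRU" else -1)
            frames.append(page)
    return misses

lru_misses = simulate("LRU")
mru_misses = simulate("MRU")
```

With one fewer frame than pages, LRU misses on every one of the 15 requests, while MRU misses only 7 times, exactly the pathology the slide describes.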
       DBMS vs. OS File System
    OS does disk space & buffer mgmt: why not let OS manage these tasks?

         Differences in OS support: portability issues
          Some limitations, e.g., files can't span disks.
         Buffer management in DBMS requires ability to:
      pin a page in buffer pool, force a page to disk (important for implementing CC & recovery),
      adjust replacement policy, and pre-fetch pages based on access patterns in typical DB operations.
       Record Formats:  Fixed Length
        Information about field types same for all records in a file; stored in system catalogs.
        Finding ith field does not require scan of record.
       Record Formats: Variable Length
        Two alternative formats (# fields is fixed):
      Page Formats: Fixed Length Records
*        Record id = <page id, slot #>.  In first alternative, moving records for free space management changes rid; may not be acceptable.
      Page Formats: Variable Length Records
*        Can move records on page without changing rid; so, attractive for fixed-length records too.
       Files of Records
        Page or block is OK when doing I/O, but higher levels of DBMS operate on records, and files of records.
        FILE: A collection of pages, each containing a collection of records. Must support:
     insert/delete/modify record
     read a particular record (specified using record id)
     scan all records (possibly with some conditions on the records to be retrieved)
       Unordered (Heap) Files
        Simplest file structure contains records in no particular order.
        As file grows and shrinks, disk pages are allocated and de-allocated.
        To support record level operations, we must:
      keep track of the pages in a file
      keep track of free space on pages
      keep track of the records on a page
        There are many alternatives for keeping track of this.
       Heap File Implemented as a List
         The header page id and Heap file name must be stored someplace.
         Each page contains 2 `pointers plus data.
       Heap File Using a Page Directory
         The entry for a page can include the number of free bytes on the page.
         The directory is a collection of pages; linked list implementation is just one alternative.
      Much smaller than linked list of all HF pages!
       Alternative File Organizations
Many alternatives exist, each ideal for some situations, and not so good in others:
      Heap (random order) files:  Suitable when typical access is a file scan retrieving all records.
      Sorted Files:  Best if records must be retrieved in some order, or only a `range of records is needed.
      Indexes: Data structures to organize records via trees or hashing. 
      Like sorted files, they speed up searches for a subset of records, based on values in certain (search key) fields
      Updates are much faster than in sorted files.
       Cost Model for Our Analysis
We ignore CPU costs, for simplicity:
     B:  The number of data pages
     R:  Number of records per page
     D:  (Average) time to read or write disk page
     Measuring number of page I/Os ignores gains of pre-fetching a sequence of pages; thus, even I/O cost is only approximated.  
     Average-case analysis; based on several simplistic assumptions.
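The 'Cost of Operations' tables further below were images in the original deck; the headline heap-file and sorted-file entries can be sketched from this model (these are the standard textbook estimates, expressed with B data pages and D time per page I/O):

```python
import math

# Average-case I/O cost estimates, CPU cost ignored.
def heap_costs(B, D):
    return {"scan": B * D,
            "eq_search": 0.5 * B * D,  # scan half the file on average
            "insert": 2 * D}           # read last page, write it back

def sorted_costs(B, D):
    search = D * math.log2(B)          # binary search over pages
    return {"scan": B * D,
            "eq_search": search,
            "insert": search + B * D}  # search, then shift later records
```

For B = 1024 pages and D = 1 (cost in units of D): a heap equality search costs 512, a sorted-file equality search only 10, but a sorted-file insert costs 1034 vs. the heap's 2.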
       Assumptions in Our Analysis
        Heap Files:
      Equality selection on key; exactly one match.
        Sorted Files:
      Files compacted after deletions.
        Indexes:
    Alt (2), (3): data entry size = 10% size of record
      Hash: No overflow buckets.
      80% page occupancy => File size = 1.25 data size
      Tree: 67% occupancy (this is typical).
      Implies file size =  1.5 data size

       Assumptions (contd.)
       Scans:
   Leaf levels of a tree-index are chained.
   Index data-entries plus actual file scanned for unclustered indexes.
       Range searches:
   We use tree indexes to restrict the set of data records fetched, but ignore hash indexes.
       Cost of Operations
       Indexes
        A heap file allows us to retrieve records:
   By specifying the rid, or
   By scanning all records sequentially.
         Sometimes, we want to retrieve records by specifying the values in one or more fields, e.g.,
   Find all students in the CS department
   Find all students with gpa > 3
        Indexes are file structures that enable us to answer such value-based queries efficiently.

       Indexes
         An index on a file speeds up selections on the search key fields for the index.
      Any subset of the fields of a relation can be the search key for an index on the relation.
      Search key is not the same as key (minimal set of fields that uniquely identify a record in a relation).
         An index contains a collection of data entries, and supports efficient retrieval of all data entries k* with a given key value k.
    Given data entry k*, we can find record with key k in at most one disk I/O.  (not really but ok for us)
      Alternatives for Data Entry k* in Index
         In a data entry k* we can store:
     Data record with key value k, or
     <k, rid of data record with search key value k>, or
     <k, list of rids of data records with search key k>
         Choice of alternative for data entries is orthogonal to the indexing technique used to locate data entries with a given key value k.
    Examples of indexing techniques: B+ trees, hash-based structures
    Typically, index contains auxiliary information that directs searches to the desired data entries
       Alternatives for Data Entries (Contd.)
         Alternative 1:
      If this is used, index structure is a file organization for data records (instead of a Heap file or sorted file).
      At most one index on a given collection of data records can use Alternative 1.  (Otherwise, data records are duplicated, leading to redundant storage and potential inconsistency.)
      If data records are very large,  # of pages containing data entries is high.  Implies size of auxiliary information in the index is also large, typically.
       Alternatives for Data Entries (Contd.)
         Alternatives 2 and 3:
      Data entries typically much smaller than data records.  So, better than Alternative 1 with large data records, especially if search keys are small. (Portion of index structure used to direct search, which depends on size of data entries, is much smaller than with Alternative 1.)
      Alternative 3 more compact than Alternative 2, but leads to variable sized data entries even if search keys are of fixed length.
       Index Classification
         Primary vs. secondary:  If search key contains primary key, then called primary index.
      Unique index:  Search key contains a candidate key.
         Clustered vs. unclustered:  If order of data records is the same as, or `close to, order of data entries, then called clustered index.
      Alternative 1 implies clustered; in practice, clustered also implies Alternative 1 (since sorted files are rare).
      A file can be clustered on at most one search key.
      Cost of retrieving data records through index varies greatly based on whether index is clustered or not!
      Clustered vs. Unclustered Index
         Suppose that Alternative (2) is used for data entries, and that the data records are stored in a Heap file.
        To build clustered index, first sort the Heap file (with some free space on each page for future inserts). 
      Overflow pages may be needed for inserts.  (Thus, order of data recs is `close to, but not identical to, the sort order.)
       Index Classification cont.
          Composite Search Keys: Search on a combination of fields.
        Equality query: every field value is equal to a constant value. E.g., w.r.t. a <sal, age> index:
       age = 20 and sal = 75
        Range query: some field value is not a constant. E.g.:
       age = 20; or age = 20 and sal > 10
          Data entries in index sorted by search key to support range queries.
       Lexicographic order, or
       Spatial order.
Classes of Indexes

         Unique/non-unique: whether the search key is unique or non-unique.
         Dense/non-dense: whether every search key (every record, for a unique key) in the data file has a corresponding pointer in the index.
         Clustered/non-clustered: whether the order of the index search key equals the file order.
         Primary/secondary: whether the search key equals the primary key. Also, the answer is a single data file record vs. a set of records.

      Note: for a unique key, a dense index indexes every record; a non-dense index does not.
      Note: a non-dense index must be clustered!


Dense Clustered Index File
         Dense index: an index record appears for every search-key value in the file.
       Example of Sparse Clustered Index File
Sparse Index Files
         Sparse index: contains index records for only some search-key values.
      Applicable when records are sequentially ordered on the search key.
         To locate a record with search-key value K:
      find the index record with the largest search-key value ≤ K, then
      search the file sequentially starting at the record that index record points to.
         Less space and less maintenance overhead for insertions and deletions.
         Generally slower than a dense index for locating records.
         Good tradeoff: a sparse index with an index entry for every block in the file, corresponding to the least search-key value in the block. (Common for hash indexes!)
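The sparse-index lookup procedure is a two-step search; a small Python sketch (the keys and block numbers are toy data, not from the slides):

```python
import bisect

# One (least search key in block, block id) entry per block of a
# sequentially ordered file.
sparse_index = [(10, 0), (30, 1), (50, 2)]

def locate_block(key):
    # Step 1: largest index key <= the search key (binary search);
    # step 2 (not shown) would scan that block sequentially.
    keys = [k for (k, _) in sparse_index]
    i = bisect.bisect_right(keys, key) - 1
    return sparse_index[max(i, 0)][1]
```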
       Note on Alternatives cost
          Alternative 1: entries are large (whole records), so the index is bigger (taller), but it avoids the +1 I/O to fetch the record!

          Alternatives 2, 3: entries are small (key + pointer), so the index is smaller, but each match needs the +1 I/O to fetch the record.
       System Catalogs
        For each index:
     structure (e.g., B+ tree) and search key fields
        For each relation:
     name, file name, file structure (e.g., Heap file)
     attribute name and type, for each attribute
     index name, for each index
     integrity constraints
        For each view:
     view name and definition
        Plus statistics, authorization, buffer pool size, etc.
       Attr_Cat(attr_name, rel_name, type, position)
       Summary
         Disks provide cheap, non-volatile storage.
      Random access, but cost depends on location of page on disk; important to arrange data sequentially to minimize seek and rotation delays.
         Buffer manager brings pages into RAM.
      Page stays in RAM until released by requestor.
      Written to disk when frame chosen for replacement (which is sometime after requestor releases the page).
      Choice of frame to replace based on replacement policy.
      Tries to pre-fetch several pages at a time.
       Summary (Contd.)
         DBMS vs. OS File Support
      DBMS needs features not found in many OSs, e.g., forcing a page to disk, controlling the order of page writes to disk, files spanning disks, ability to control pre-fetching and page replacement policy based on predictable access patterns, etc.
         Variable length record format with a field offset directory offers support for direct access to the i-th field and for null values.
         Slotted page format supports variable length records and allows records to move on page.
       Summary (Contd.)
         File layer keeps track of pages in a file, and supports abstraction of a collection of records.
      Pages with free space identified using linked list or directory structure (similar to how pages in file are kept track of).
         Indexes support efficient retrieval of records based on the values in some fields.
         Catalog relations store information about relations, indexes and views.  (Information that is common to all records in a given collection.)
       Summary (cont.)
        Many alternative file organizations exist, each appropriate in some situation.
        If selection queries are frequent, sorting the file or building an index is important.
     Hash-based indexes only good for equality search.
     Sorted files and tree-based indexes best for range search; also good for equality search.  (Files rarely kept sorted in practice; B+ tree index is better.)
        Index is a collection of data entries plus a way to quickly find entries with given key values.
       Summary (Contd.)
        Data entries can be actual data records, <key, rid> pairs, or <key, rid-list> pairs.
     Choice orthogonal to indexing technique used to locate data entries with a given key value.
        Can have several indexes on a given file of data records, each with a different search key.
        Indexes can be classified as clustered vs. unclustered, primary vs. secondary, and dense vs. sparse.  Differences have important consequences for utility/performance.
       DBMS Concepts Review



      Structure of a DBMS
        A typical DBMS has a layered architecture.
        The figure does not show the concurrency control and recovery components.
        This is one of several possible architectures; each system has its own variations.
      Transaction: An Execution of a DB Program
         Key concept is transaction, which is an atomic sequence of database actions (reads/writes).
         Each transaction, executed completely, must leave the DB in a consistent state if DB is consistent when the transaction begins.
      Users can specify some simple integrity constraints on the data, and the DBMS will enforce these constraints.
      Beyond this, the DBMS does not really understand the semantics of the data.  (e.g., it does not understand how the interest on a bank account is computed).
      Thus, ensuring that a transaction (run alone) preserves consistency is ultimately the user's responsibility!
       Concurrency Control
        Concurrent execution of user programs is essential for good DBMS performance.
      Because disk accesses are frequent and relatively slow, it is important to keep the CPU humming by working on several user programs concurrently.
        Interleaving actions of different user programs can lead to inconsistency: e.g., a check is cleared while the account balance is being computed.
        DBMS ensures such problems don't arise: users can pretend they are using a single-user system.
       Example
         Consider two transactions (Xacts):
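As a hypothetical illustration of the inconsistency described above, consider a lost-update interleaving of two Xacts (simulated sequentially; all values are assumed):

```python
# Two Xacts on account A: T1 deposits 100, T2 credits 6% interest.
# Run serially (T1 then T2), the final balance is (1000 + 100) * 1.06 = 1166.
balance = 1000

# A bad interleaving: both transactions read the old balance first.
t1_read = balance            # T1: R(A)
t2_read = balance            # T2: R(A)
balance = t1_read + 100      # T1: W(A) -> 1100
balance = t2_read * 1.06     # T2: W(A) -> 1060; T1's deposit is lost!
```

The interleaved schedule is equivalent to no serial order of T1 and T2, which is exactly what the DBMS's concurrency control must rule out.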
      Scheduling Concurrent Transactions
         DBMS ensures that execution of {T1, ... , Tn} is equivalent to some serial execution T1 ... Tn.
      Before reading/writing an object, a transaction requests a lock on the object, and waits till the DBMS gives it the lock.  All locks are released at the end of the transaction.  (Strict 2PL locking protocol.)
      Idea: If an action of Ti (say, writing X) affects Tj (which perhaps reads X), one of them, say Ti, will obtain the lock on X first and Tj is forced to wait until Ti completes; this effectively orders the transactions.
      What if Tj already has a lock on Y and Ti later requests a lock on Y? (Deadlock!) Ti or Tj is aborted and restarted!
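A minimal sketch of the locking idea (exclusive locks only, single-threaded simulation; the `LockManager` class is a hypothetical illustration, not a real DBMS component):

```python
# Strict 2PL sketch: locks are acquired before each access and
# released only when the transaction commits or aborts.
class LockManager:
    def __init__(self):
        self.owner = {}            # object -> transaction holding its lock

    def lock(self, xact, obj):
        holder = self.owner.get(obj)
        if holder is not None and holder != xact:
            return False           # caller must wait (or abort on deadlock)
        self.owner[obj] = xact
        return True

    def release_all(self, xact):   # Strict 2PL: release only at the end
        for obj in [o for o, x in self.owner.items() if x == xact]:
            del self.owner[obj]

lm = LockManager()
lm.lock("T1", "X")             # T1 gets the lock on X
blocked = lm.lock("T2", "X")   # False: T2 must wait for T1
lm.release_all("T1")           # T1 commits, releasing its locks
lm.lock("T2", "X")             # now T2 proceeds
```

Because T2 cannot touch X until T1 finishes, conflicting actions are effectively serialized, as described above.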
       Ensuring Atomicity
         DBMS ensures atomicity (all-or-nothing property) even if system crashes in the middle of a Xact.
         Idea: Keep a log (history) of all actions carried out by the DBMS while executing a set of Xacts:
      Before a change is made to the database, the corresponding log entry is forced to a safe location.  (WAL protocol; OS support for this is often inadequate.)
       After a crash, the effects of partially executed transactions are undone using the log. (Thanks to WAL, if a log entry wasn't saved before the crash, the corresponding change was not applied to the database!)
       The Log
         The following actions are recorded in the log:
      Ti writes an object:  The old value and the new value.
      Log record must go to disk before the changed page!
      Ti commits/aborts:  A log record indicating this action.
         Log records are chained together by Xact id, so it's easy to undo a specific Xact (e.g., to resolve a deadlock).
         Log is often duplexed and archived on stable storage.
         All log related activities (and in fact, all CC related activities such as lock/unlock, dealing with deadlocks etc.) are handled transparently by the DBMS.
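A toy sketch of the WAL idea described above (in-memory stand-ins for the log and database; all names are hypothetical):

```python
# Write-ahead logging sketch: before a page is updated, a log record
# holding the old and new values is appended (forced) to the log.
log = []                 # stand-in for stable storage
db = {"A": 1000}         # stand-in for the database

def write(xact, obj, new_value):
    log.append((xact, obj, db[obj], new_value))  # log record FIRST (WAL)
    db[obj] = new_value                          # then update the page

def undo(xact):
    """After a crash or abort, undo a transaction's changes using the log."""
    for x, obj, old, new in reversed(log):
        if x == xact:
            db[obj] = old

write("T1", "A", 900)
undo("T1")               # T1 aborts; A is restored to 1000
```

Replaying the log backwards restores old values, which is why each record must reach disk before the page it describes.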
       Important historical systems (1970s)          
         Ingres (U. of California, Berkeley) [J1, J2]

         System R (IBM Research, San Jose) [J3]

         These projects invented most of the concepts of today's relational systems and had a tremendous impact on commercial systems and the SQL standard
       Highlights of Ingres   
         Process structure
    4 processes, communication between processes (limited address space)
         Query language: QUEL (a relational calculus language)
         Catalog (data dictionary) as Relations
    Many Unix files
         Access methods
     Hashing & ISAM; B-trees added later
         Query optimization
     Decomposition (inefficient!)
         Security
    Query modification.
       Highlights of system R
        Query language SEQUEL → SQL
   Full power, general + embedded language
       Physical structure
   Modular, B-tree, Clustered vs unclustered
       Query optimization
    Cost-based, very powerful
       Security
   Views, grant/revoke
       Concurrency control/recovery
   First general solutions
       The compilation approach
       Databases make these folks happy ...
       End users and DBMS vendors
       DB application programmers
     E.g., smart webmasters
       Database administrator (DBA)
     Designs logical /physical schemas
     Handles security and authorization
     Data availability, crash recovery
     Database tuning as needs evolve
       Summary
        DBMS used to maintain, query large datasets.
        Benefits include recovery from system crashes, concurrent access, quick application development, data integrity and security.
        Levels of abstraction give data independence.
        A DBMS typically has a layered architecture.
        DBAs hold responsible jobs and are well-paid! :-)
        DBMS R&D is one of the broadest, most exciting areas in CS.

