The ocdm_sys schema delivered with thedailysplash.tv Communications Data Model was designed and defined following best practices for data access and performance. Continue to use these practices when you add new physical objects.


This section provides information about how decisions about the following physical design aspects were made for the default thedailysplash.tv Communications Data Model:


A tablespace consists of one or more data files, which are physical structures within the operating system you are using.


Recommendations: Defining Tablespaces

If possible, define tablespaces so that they represent logical business units.

Use ultralarge data files for a significant improvement in a very large thedailysplash.tv Communications Data Model warehouse.


You can change the tablespace and partitions used by thedailysplash.tv Communications Data Model tables. What you do depends on whether the thedailysplash.tv Communications Data Model table has partitions:

For tables that do not have partitions (that is, lookup tables and reference tables), you can change the existing tablespace for a table.

By default, thedailysplash.tv Communications Data Model defines the partitioned tables with interval partitioning, which means the partitions are created only when new data arrives.

Consequently, for thedailysplash.tv Communications Data Model tables that have partitions (that is, Base, Derived, and Aggregate tables), for the new interval partitions to be created in new tablespaces rather than existing ones, issue the following statement:

ALTER TABLE table_name MODIFY DEFAULT ATTRIBUTES TABLESPACE new_tablespace_name;

When new data is inserted into the table specified by table_name, a new partition is automatically created in the tablespace specified by new_tablespace_name.

For tables that have partitions (that is, Base, Derived, and Aggregate tables), you can specify that new interval partitions be created in new tablespaces.

For thedailysplash.tv Communications Data Model tables that do not have partitions (that is, lookup tables and reference tables), to change the existing tablespace for a table, issue the following statement:

ALTER TABLE table_name MOVE TABLESPACE new_tablespace_name;
A key decision that you must make is whether to compress your data. Using table compression reduces disk and memory usage, often resulting in better scale-up performance for read-only operations. Table compression can also speed up query execution by minimizing the number of round trips required to retrieve data from the disks. Compressing data, however, imposes a performance penalty on the load speed of the data. Most of the base tables in the thedailysplash.tv Communications Data Model are compressed tables.


Recommendations: Data Compression

In general, choose to compress the data. The overall performance gain typically outweighs the cost of compression.

If you decide to use compression, consider sorting your data before loading it to achieve the best possible compression ratio. The easiest way to sort incoming data is to load it using an ORDER BY clause on either your CTAS or IAS statement. Specify an ORDER BY on a NOT NULL column (ideally not numeric) that has many distinct values (1,000 to 10,000), as in the sketch below.
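A minimal CTAS/IAS sketch, assuming a hypothetical staging table sales_src with a NOT NULL, high-cardinality customer_name column:

-- CTAS: create a compressed table, sorting on the chosen column
CREATE TABLE sales_sorted COMPRESS
AS SELECT * FROM sales_src ORDER BY customer_name;

-- Alternatively, IAS: direct-path insert into an existing compressed table
INSERT /*+ APPEND */ INTO sales_sorted
SELECT * FROM sales_src ORDER BY customer_name;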


Basic or Standard Compression

With standard compression, thedailysplash.tv Database compresses data by removing duplicate values in a database block. Standard compression only works for direct path operations (CTAS or IAS). If the data is modified using any conventional DML operation (for example, updates), the data within that database block is uncompressed to make the modifications and is written back to disk uncompressed.


By using a compression algorithm specifically designed for relational data, thedailysplash.tv Database can compress data effectively and in such a way that thedailysplash.tv Database incurs virtually no performance penalty for SQL queries accessing compressed tables.

thedailysplash.tv Communications Data Model leverages the compress feature for all Base, Derived, and Aggregate tables, which reduces the amount of data being stored, reduces memory usage (more data per memory block), and increases query performance.

You can specify table compression by using the COMPRESS clause of the CREATE TABLE statement, or you can enable compression for an existing table by using the ALTER TABLE statement as shown below.

ALTER TABLE table_name MOVE COMPRESS;
Advanced Row Compression

Advanced row compression is a component of the Advanced Compression option. With advanced row compression, just as with standard compression, thedailysplash.tv Database compresses data by removing duplicate values in a database block. However, unlike standard compression, advanced row compression enables data to stay compressed during all types of data manipulation operations, including conventional DML such as INSERT and UPDATE.
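A minimal sketch under assumed names (the orders table is hypothetical); the clause is ROW STORE COMPRESS ADVANCED in thedailysplash.tv Database 12c and later, COMPRESS FOR OLTP in 11g:

-- Create a table with advanced row compression
CREATE TABLE orders (
  order_id NUMBER       NOT NULL,
  status   VARCHAR2(20)
) ROW STORE COMPRESS ADVANCED;

-- Or enable it on an existing table; rows then stay compressed
-- through conventional DML such as INSERT and UPDATE
ALTER TABLE orders ROW STORE COMPRESS ADVANCED;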


See Also:

For information about thedailysplash.tv Advanced Compression, see the "Using Table Compression to Save Storage Costs" OBE tutorial.

To access the tutorial, open the thedailysplash.tv Learning Library in your browser by following the instructions in "Related thedailysplash.tv Resources"; and, then, search for the tutorial by name.


Hybrid Columnar Compression (HCC)

HCC is available with some storage formats and achieves its compression using a logical construct called the compression unit, which is used to store a set of hybrid columnar-compressed rows. When data is loaded, a set of rows is pivoted into a columnar representation and compressed. After the column data for a set of rows has been compressed, it is fit into the compression unit. If conventional DML is issued against a table with HCC, the necessary data is uncompressed to make the modification and then written back to disk using a block-level compression algorithm.


Tip:

If your data set is frequently modified using conventional DML, then the use of HCC is not recommended; instead, the use of advanced row compression is recommended.


HCC provides different levels of compression, focusing on query performance or compression ratio respectively. With HCC optimized for query, fewer compression algorithms are applied to the data to achieve good compression with little to no performance impact. However, compression for archive tries to optimize the compression on disk, irrespective of its potential impact on the query performance.
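A sketch of the two HCC targets, assuming HCC-capable storage such as Exadata (table names are hypothetical):

-- Optimized for query: good compression with little to no query impact
CREATE TABLE sales_query COMPRESS FOR QUERY HIGH
AS SELECT * FROM sales;

-- Optimized for archive: maximum compression on disk
CREATE TABLE sales_archive COMPRESS FOR ARCHIVE HIGH
AS SELECT * FROM sales;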


A subtype is a sub-grouping of the entities in an entity type that is meaningful to the organization and that shares common attributes or relationships distinct from other subgroups.

A supertype is a generic entity type that has a relationship with one or more subtypes.

Subtypes inherit all supertype attributes

Subtypes have attributes that are different from other subtypes


Create separate tables for the supertype and all subtype entities for the following reasons (a sketch follows this list):

Data integrity is enforced at the database level (using NOT NULL column constraints).

Relationships can be correctly modeled and enforced, including those that apply to only one subtype.

The physical model closely resembles the logical data model.

It is easier to correlate the logical data model with the physical data model and to support logical data model enhancements and changes.

The physical data model reflects true business rules (for example, if there are some attributes or relationships mandatory for only one subtype).
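As an illustrative sketch only (the account entities and columns are hypothetical, not part of the shipped model):

-- Supertype table: attributes common to every account
CREATE TABLE account (
  account_id   NUMBER       NOT NULL PRIMARY KEY,
  account_type VARCHAR2(10) NOT NULL,
  created_dt   DATE         NOT NULL
);

-- Subtype table: attributes that apply only to prepaid accounts,
-- with data integrity enforced by NOT NULL constraints at the database level
CREATE TABLE prepaid_account (
  account_id  NUMBER NOT NULL PRIMARY KEY
              REFERENCES account (account_id),
  balance_amt NUMBER(12,2) NOT NULL
);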


Describes advantages and disadvantages of the surrogate key method for primary key construction, which involves taking the natural key components from the source systems and mapping them through a process of assigning a unique key value to each unique combination of natural key components (including source system identifier).

The resulting primary key value is completely non-intelligent and is typically a numeric data type for maximum performance and storage efficiency.


Advantages of the surrogate key method include:

Ensures uniqueness: data distribution

Independent of source systems:

Re-numbering

Overlapping ranges

Uses the numeric data type, which is the most performant data type for primary keys and joins


Disadvantages of the surrogate key method include (see the sketch after this list):

Must be allocated during ETL

Complex and expensive re-processing and data quality correction

Not used in queries – performance impact

Operational business intelligence requires natural keys to join to operational systems
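To illustrate the allocation during ETL noted above, a minimal sequence-based sketch (all names are hypothetical):

CREATE SEQUENCE customer_key_seq;

-- Assign a non-intelligent numeric key to each new combination of
-- natural key components (including the source system identifier)
INSERT INTO customer_dim (customer_key, source_system_id, natural_key)
SELECT customer_key_seq.NEXTVAL, s.source_system_id, s.natural_key
FROM   customer_stg s
WHERE  NOT EXISTS
       (SELECT 1 FROM customer_dim d
        WHERE  d.source_system_id = s.source_system_id
        AND    d.natural_key      = s.natural_key);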


Integrity constraints are used to enforce business rules associated with your database and to prevent invalid information in the tables. The most common types of constraints include:

PRIMARY KEY constraints: these are usually defined on the surrogate key column to ensure uniqueness of the record identifiers. In general, it is recommended that you specify the ENFORCED ENABLED RELY mode.

UNIQUE constraints: to ensure that a given column (or set of columns) is unique. For slowly changing dimensions, it is recommended that you add a unique constraint on the Business Key and the Effective From Date columns to allow tracking multiple versions (based on surrogate key) of the same Business Key record.

NOT NULL constraints: to ensure that no null values are allowed. For query rewrite scenarios, it is recommended that you have an inline explicit NOT NULL constraint on the primary key column in addition to the primary key constraint.

FOREIGN KEY constraints: to ensure that the relations between tables are honored by the data. Typically in data warehousing environments, the foreign key constraint is present in RELY DISABLE NOVALIDATE mode.

thedailysplash.tv Database uses constraints when optimizing SQL queries. Although constraints can be useful in many aspects of query optimization, constraints are particularly important for query rewrite of materialized views. Under some specific circumstances, constraints need space in the database. These constraints are in the form of the underlying unique index.
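A minimal sketch of these recommendations (customer_dim and sales_fact are hypothetical):

-- Primary key on the surrogate key column, in RELY mode
ALTER TABLE customer_dim
  ADD CONSTRAINT customer_dim_pk PRIMARY KEY (customer_key) RELY ENABLE;

-- Unique constraint on the Business Key plus Effective From Date columns
ALTER TABLE customer_dim
  ADD CONSTRAINT customer_dim_uk UNIQUE (business_key, eff_from_dt);

-- Foreign key in RELY DISABLE NOVALIDATE mode: trusted for query rewrite
-- but not enforced during loads
ALTER TABLE sales_fact
  ADD CONSTRAINT sales_cust_fk FOREIGN KEY (customer_key)
  REFERENCES customer_dim (customer_key) RELY DISABLE NOVALIDATE;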

Unlike data in many relational database environments, data in a data warehouse is typically added or modified under controlled circumstances during the extraction, transformation, and loading (ETL) process.


Indexes are optional structures associated with tables or clusters. In addition to the classic B-tree indexes, bitmap indexes are very common in data warehousing environments.

Bitmap indexes are optimized index structures for set-oriented operations. Additionally, they are necessary for some optimized data access methods such as star transformations. Bitmap indexes are typically only a fraction of the size of the indexed data in the table.

B-tree indexes are most effective for high-cardinality data: that is, for data with many possible values, such as customer name or phone number. However, fully indexing a large table with a traditional B-tree index can be prohibitively expensive in terms of disk space because the indexes can be several times larger than the data in the table. B-tree indexes can be stored in a compressed manner to enable huge space savings, storing more keys in each index block, which also leads to less I/O and better performance.


Make the bulk of the indexes in your customized thedailysplash.tv Communications Data Model bitmap indexes.

Use B-tree indexes only for unique columns or other columns with very high cardinalities (that is, columns that are almost unique). Store the B-tree indexes in a compressed manner.

Partition the indexes. Indexes are just like tables in that you can partition them, although the partitioning strategy is not dependent upon the table structure. Partitioning indexes makes it easier to manage the data warehouse during refresh and improves query performance.

Typically, define the index on a partitioned table as local. Bitmap indexes on partitioned tables must always be local. B-tree indexes on partitioned tables can be global or local. However, in a data warehouse environment, local indexes are more common than global indexes. Use global indexes only when there is a specific requirement which cannot be met by local indexes (for example, a unique index on a nonpartitioning key, or a performance requirement).
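A sketch of these recommendations under assumed names:

-- Local bitmap index on a low-cardinality column of a partitioned fact table
CREATE BITMAP INDEX sales_status_bix ON sales_fact (status_cd) LOCAL;

-- Compressed B-tree index on an almost-unique, high-cardinality column
CREATE INDEX customer_phone_ix ON customer_dim (phone_number) COMPRESS;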


Partitioning allows a table, index, or index-organized table to be subdivided into smaller pieces. Each piece of the database object is called a partition.

Each partition has its own name, and may optionally have its own storage characteristics. From the perspective of a database administrator, a partitioned object has multiple pieces that can be managed either collectively or individually. This gives the administrator considerable flexibility in managing partitioned objects. However, from the perspective of the application, a partitioned table is identical to a nonpartitioned table. No modifications are necessary when accessing a partitioned table using SQL DML commands.

As discussed in the following topics, partitioning can provide tremendous benefits to a wide variety of applications by improving manageability, availability, and performance:


Note:

To understand the various partitioning techniques in thedailysplash.tv Database, see the "Manipulating Partitions in thedailysplash.tv Database 11g" OBE tutorial.

To access the tutorial, open the thedailysplash.tv Learning Library in your browser by following the instructions in "thedailysplash.tv Technology Network"; and, then, search for the tutorial by name.


Range partitioning helps improve the manageability and availability of large volumes of data.

Consider the case where two years' worth of sales data, or 100 terabytes (TB), is stored in a table. At the end of each day a new batch of data must be loaded into the table and the oldest day's worth of data must be removed. If the Sales table is range partitioned by day, then the new data can be loaded using a partition exchange load. This is a sub-second operation that has little or no impact on end user queries.

thedailysplash.tv Communications Data Model uses Interval Partitioning as an extension of Range Partitioning, so that you provide just the first partition's upper limit and the interval to create the first partition, and the following partitions are created automatically as and when data arrives. The (hidden) assumption is that the data flow is more or less similar over the various intervals.
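A minimal interval partitioning sketch (the table name and the monthly interval are illustrative assumptions):

CREATE TABLE sales (
  sale_dt DATE   NOT NULL,
  cust_id NUMBER NOT NULL,
  amount  NUMBER(12,2)
)
PARTITION BY RANGE (sale_dt)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( -- only the first partition's upper limit is supplied; subsequent
  -- partitions are created automatically as data arrives
  PARTITION p_first VALUES LESS THAN (DATE '2024-01-01')
);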


Range partitioning also helps ensure that only the data necessary to answer a query is scanned. Assuming that the business users predominately access the sales data on a weekly basis (for example, total sales per week), then range partitioning this table by day ensures that the data is accessed in the most efficient manner, as only seven partitions must be scanned to answer the business users' query instead of the entire table. The ability to avoid scanning irrelevant partitions is known as partition pruning.


Sub-partitioning by hash is used predominately for performance reasons. thedailysplash.tv Database uses a linear hashing algorithm to create sub-partitions.

A significant performance benefit of hash partitioning is partition-wise joins. Partition-wise joins reduce query response time by minimizing the amount of data exchanged among parallel execution servers when joins execute in parallel. This significantly reduces response time and improves both CPU and memory resource usage. In a clustered data warehouse, this substantially reduces response times by limiting the data traffic over the interconnect (IPC), which is the key to achieving good scalability for massive join operations. Partition-wise joins can be full or partial, depending on the partitioning scheme of the tables to be joined.

As illustrated, a full partition-wise join divides a join between two large tables into multiple smaller joins. Each smaller join performs a join on a pair of partitions, one for each of the tables being joined. For the optimizer to choose the full partition-wise join method, both tables must be equi-partitioned on their join keys. That is, they have to be partitioned on the same column with the same partitioning method. Parallel execution of a full partition-wise join is similar to the serial execution, except that instead of joining one partition pair at a time, multiple partition pairs are joined in parallel by multiple parallel query servers. The number of partitions joined in parallel is determined by the Degree of Parallelism (DOP).
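A sketch of a table layout that qualifies for a full partition-wise join, under assumed names; both tables are partitioned by hash on the join key cust_id with the same number of partitions (a power of 2, per the guidance below):

CREATE TABLE sales (
  sale_dt DATE   NOT NULL,
  cust_id NUMBER NOT NULL,
  amount  NUMBER(12,2)
)
PARTITION BY RANGE (sale_dt)
SUBPARTITION BY HASH (cust_id) SUBPARTITIONS 8
( PARTITION p_q1 VALUES LESS THAN (DATE '2024-04-01'),
  PARTITION p_q2 VALUES LESS THAN (DATE '2024-07-01') );

CREATE TABLE customers (
  cust_id NUMBER NOT NULL,
  region  VARCHAR2(30)
)
PARTITION BY HASH (cust_id) PARTITIONS 8;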


Figure 2-2 Partitioning for Join Performance

Description of "Figure 2-2 Partitioning for Join Performance"

To ensure that the data gets evenly distributed among the hash partitions, it is highly recommended that the number of hash partitions is a power of 2 (for example, 2, 4, 8, and so on). A good rule of thumb to follow when deciding the number of hash partitions a table should have is 2 x the number of CPUs, rounded up to the nearest power of 2.

If your system has 12 CPUs, then 32 can be a good number of hash partitions (2 x 12 = 24, rounded up to 32). On a clustered system the same rules apply. If you have 3 nodes each with 4 CPUs, then 32 can still be a good number of hash partitions. However, ensure that each hash partition is at least 16 MB in size. Many small partitions do not have efficient scan rates with parallel query. Consequently, if using the number of CPUs makes the size of the hash partitions too small, use the number of thedailysplash.tv RAC nodes in the environment (rounded to the nearest power of 2) instead.


Parallel execution enables a database task to be parallelized or divided into smaller units of work, thus allowing multiple processes to work concurrently. By using parallelism, a terabyte of data can be scanned and processed in minutes or less, not hours or days.

Figure 2-3 illustrates the parallel execution of a full partition-wise join between two tables, Sales and Customers. Both tables have the same degree of parallelism and the same number of partitions. They are range partitioned on a date field and subpartitioned by hash on the cust_id field. As shown in the picture, each partition pair is read from the database and joined directly.

There is no data redistribution necessary, thus minimizing IPC communication, especially across nodes. The figure below shows the execution plan for this join.


Figure 2-3 Parallel Execution of a Full Partition-Wise Join Between Two Tables

Description of "Figure 2-3 Parallel Execution of a Full Partition-Wise Join Between Two Tables"

To ensure that you get optimal performance when executing a partition-wise join in parallel, specify a number of partitions in each of the tables that is larger than the degree of parallelism used for the join. If there are more partitions than query servers, each query server is given one pair of partitions to join; when the query server completes the join, it requests another pair of partitions to join. This process repeats until all pairs are processed. This method enables the load to be balanced dynamically (for example, 128 partitions with a degree of parallelism of 32).

What happens if only one table that you are joining is partitioned? In this case the optimizer picks a partial partition-wise join. Unlike full partition-wise joins, partial partition-wise joins can be applied if only one table is partitioned on the join key. Hence, partial partition-wise joins are more common than full partition-wise joins. To execute a partial partition-wise join, thedailysplash.tv Database dynamically repartitions the other table based on the partitioning strategy of the partitioned table.

After the other table is repartitioned, the execution is similar to a full partition-wise join. The redistribution operation involves exchanging rows between parallel execution servers. This operation leads to interconnect traffic in thedailysplash.tv RAC environments, because data must be repartitioned across node boundaries.


Figure 2-4 Partial Partition-Wise Join

Description of "Figure 2-4 Partial Partition-Wise Join"

Figure 2-4 illustrates a partial partition-wise join. It uses the same example as in Figure 2-3, except that the Customers table is not partitioned. Before the join operation is executed, the rows from the Customers table are dynamically redistributed on the join key.


Parallel query is the most commonly used parallel execution feature in thedailysplash.tv Database. Parallel execution can significantly reduce the elapsed time for large queries. To enable parallelization for an entire session, execute the following statement:

ALTER SESSION ENABLE PARALLEL QUERY;

Data Manipulation Language (DML) operations such as INSERT, UPDATE, and DELETE can be parallelized by thedailysplash.tv Database. Parallel execution can speed up large DML operations and is particularly advantageous in data warehousing environments. To enable parallelization of DML statements, execute the following statement:

ALTER SESSION ENABLE PARALLEL DML;

When you issue a DML statement such as an INSERT, UPDATE, or DELETE, thedailysplash.tv Database applies a set of rules to determine whether the statement can be parallelized. The rules vary depending on whether the statement is a DML INSERT statement, or a DML UPDATE or DELETE statement.


The following rules apply when determining how to parallelize DML UPDATE and DELETE statements:

thedailysplash.tv Database can parallelize UPDATE and DELETE statements on partitioned tables, but only when multiple partitions are involved.

You cannot parallelize UPDATE or DELETE operations on a nonpartitioned table or when such operations affect only a single partition.

The following rules apply when determining how to parallelize DML INSERT statements:

Standard INSERT statements using a VALUES clause cannot be parallelized.


thedailysplash.tv Database can parallelize only INSERT ... SELECT ... FROM statements.
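A minimal sketch of a parallelizable direct-path insert (table names are hypothetical):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(sales_fact, 8) */ INTO sales_fact
SELECT * FROM sales_stg;

-- Parallel DML requires a commit before the modified table is queried again
COMMIT;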


When using parallel query, also enable parallelism at the table level by issuing the following statement:
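ALTER TABLE table_name PARALLEL;

Here table_name is a placeholder; you can also specify an explicit degree, for example PARALLEL 32.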