ORA-01655: unable to extend cluster SYS.C_FILE#_BLOCK# by 128 in tablespace
SYSTEM
I encountered the above error while creating partitioned tables. For each partitioned table I was also creating LOCAL indexes, and while creating those indexes Oracle threw the error above. The script was organized to create a partitioned table, then create its associated indexes (LOCAL or GLOBAL), and repeat that for every table.
The error confused me initially because the tables and GLOBAL indexes were being created fine, but all of the LOCAL index creations were failing. I checked the SYSTEM tablespace and it had indeed run out of space; only about 1 MB was free. With so little room it makes sense that Oracle could not get enough space to insert the data dictionary entries while creating tables or indexes. But the question remained: why did the tables and GLOBAL indexes succeed while the LOCAL index creations failed right after? I checked the LOCAL indexes that had been created without issue and found they resided in the schema's default tablespace, not in SYSTEM. After some research and experimenting with the SYSTEM tablespace, I came to understand that a partitioned table's metadata is stored in the SYSTEM tablespace, and that metadata could not fit in the 1 MB of free space that remained. This is no great discovery and experts already know it, but since I have only recently started working with partitioned tables, it was a new finding for me.
Thursday, November 12, 2009
Saturday, November 7, 2009
Migration to Oracle Partitioned Table with very large table
ORA-01655: unable to extend cluster SYS.C_FILE#_BLOCK# by 128 in tablespace
SYSTEM
I encountered the above error while creating partitioned tables. For each partitioned table I was creating LOCAL indexes, which were actually consuming the SYSTEM tablespace. So even though I had plenty of free space in my schema tablespace, the process eventually filled the entire SYSTEM tablespace and threw that error.
I tried specifying the partition names and tablespace names explicitly in the index creation script but still got the same error. Interestingly, the DBA_IND_PARTITIONS view showed the indexes as created in the expected tablespace, but in practice that was not true. The GLOBAL indexes, however, worked without issue, i.e. they used the default tablespace. I went with the GLOBAL option since I did not see any advantage to LOCAL indexes for my system, which is an OLTP system.
Tuesday, November 3, 2009
Tips: Oracle Database Administration
Note: you need the session privilege below to be able to execute the statements mentioned in this post:
'SELECT ANY DICTIONARY'
1. Monitor the running sql statements
SELECT b.SID "Oracle Sid",
a.spid "Unix PID",
b.serial# "Oracle Serial",
c.sql_text "SQL Text",
b.username "Oracle User",
b.osuser "OS User",
b.machine "Computer",
b.program "Program",
TO_CHAR (logon_time, 'HH24:MI:SS,yyyy/mm/dd') "Login Time",
b.action "Session Action",
c.action "SQL Action",
lockwait "LockWait",
status "Status",
optimizer_cost "Optimizer_cost"
FROM v$process a, v$session b, v$sql c
WHERE a.addr = b.paddr
AND b.sql_address = c.address
AND b.sql_hash_value = c.hash_value;
2. Find the size of tablespace
select df.tablespace_name "Tablespace",
totalusedspace "Used MB",
(df.totalspace - tu.totalusedspace) "Free MB",
df.totalspace "Total MB",
round(100 * ( (df.totalspace - tu.totalusedspace)/ df.totalspace))
"Pct. Free"
from
(select tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from dba_data_files
group by tablespace_name) df,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name
from dba_segments
group by tablespace_name) tu
where df.tablespace_name = tu.tablespace_name
3. Size of tables
SELECT s.owner,
ref.table_name,
s.segment_type,
CAST(SUM(s.bytes)/1048576 AS NUMBER(17,5)) MB
FROM dba_segments@database s,
(SELECT t.table_name,
t.table_name seg_name,
'TABLE' seg_type
FROM dba_tables@database t
WHERE t.owner = 'schema_name'
UNION
SELECT i.table_name,
i.index_name seg_name,
'INDEX' seg_type
FROM dba_indexes@database i
WHERE i.table_owner = 'schema_name'
) ref
WHERE s.segment_name = ref.seg_name
AND s.segment_type = ref.seg_type
GROUP BY GROUPING SETS((s.owner, ref.table_name, s.segment_type),(ref.table_name))
ORDER BY 1, 2, 3 DESC
Size of individual database objects (e.g. indexes, tables, etc.):
SELECT sum(bytes)/1048576 Megs, segment_name FROM user_extents WHERE segment_name = '' GROUP BY segment_name
4. Associated tablespace name of tables
select table_name, tablespace_name
from user_tables
where table_name = 'table_name'
5. Default tablespace name of the schema
select username, default_tablespace
from dba_users
where username = 'schema_name'
6. All locked objects:
SELECT
c.owner, c.object_name, c.object_type, b.SID, b.serial#, b.status,
b.osuser, b.machine
FROM
v$locked_object a, v$session b, dba_objects c
WHERE
b.SID = a.session_id
AND
a.object_id = c.object_id;
7. Frequently used DBA views:
DBA_TABLESPACES - Configuration of the tablespaces in the database, e.g. name, size, free and used space, etc.
DBA_DATA_FILES - Data files used by the tablespaces and their current statistics
DBA_SEGMENTS - Segment information, e.g. segment name, tablespace name, used and free bytes, etc.
DBA_INDEXES - Information on all indexes (except partitioned indexes), e.g. index name, tablespace name, etc.
DBA_PART_INDEXES - Information on partitioned indexes, e.g. index name, partitioning type, etc.
DBA_IND_PARTITIONS - Details of each index partition (including its tablespace name)
DBA_EXTENTS - Information on the extents allocated to each database object, i.e. tables, indexes, etc.
DBA_CONSTRAINTS - Definitions and details of the constraints in the database. Example use: find the table that a foreign key references.
V$PARAMETER - All database parameters e.g. recyclebin; sga_max_size
V$NLS_PARAMETERS - All NLS parameters like NLS_LANGUAGE; NLS_CHARACTERSET etc.
8. Kill an Oracle session
Search for the session you want to kill: select s.sid, s.serial#, s.username from v$session s;
Kill the session: alter system kill session 'sid,serial#'; -- in 11g, use 'sid,serial#,@inst_id'
9. Setting ORACLE_HOME in Solaris
ORACLE_HOME=
export ORACLE_HOME
echo $ORACLE_HOME
10. To run an operating system/shell command from the SQL*Plus console, use the host keyword.
e.g. host set ORACLE_HOME to check the Oracle home directory on MS Windows
11. Add space to a tablespace
To resize an existing datafile:
ALTER DATABASE
DATAFILE '[../data.dbf]'
RESIZE [size in M];
To add a new datafile:
ALTER TABLESPACE [tablespace_name]
ADD DATAFILE '[../data.dbf]'
SIZE [size in M];
12. How to get the version of Oracle database installed -
i. SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
ii. Check the SQLNET.LOG file
# Frequently used user management commands:
i. alter user [user_name] account lock; -- or unlock
ii. select password from dba_users where username = '[user_name]';
iii. alter user [user_name] identified by [newpass];
iv. The SELECT ANY DICTIONARY privilege is required to be able to query the DBA views.
http://www.experts-exchange.com/Database/Oracle/Q_23065829.html - Killing a running statement
http://www.benh.org/techblog/2006/07/finding-locked-objects-in-oracle/ - locked session
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/start001.htm - starting/stopping Oracle database 11g
http://mhabib.wordpress.com/2007/03/31/increase-the-size-of-tablespace/ - increase tablespace size
Monday, October 19, 2009
Computing confidence level of a software project or iteration
Usually once a project, an iteration, or a release wraps up, the team sits down for a review meeting (it goes by many names, e.g. Lessons Learned meeting, Retrospective meeting, etc.) and tries to identify the mistakes the team made (as a natural human tendency we don't like to hear that we've made mistakes, which is why people call it "Lessons Learned") and the remedies that will keep the same mistakes from recurring in the next cycle (though the lesson of history is that nobody learns from history). My objective in this post is not to dig into those issues but to answer a simple question asked at the beginning or end of the meeting (usually by the manager): "How did we do in this last release?"
There are so many ways of answering this question. Like:
- "We've done a great job. The number of defects has gone down greatly ..."
- "It was good. The developers wrote comparatively more code in this release (along with LOC, if available)."
- "The quality of the business requirements was better; the developers didn't have many complaints about requirement issues in this release."
- And so on.
But shouldn't we have a straightforward way of evaluating success or failure, or of comparing one release or project with another, that is unambiguous and, above all, measurable? After all, we build software systems that behave in a predictable way (unless it's an AI or robotics project) and can be measured with numbers.
I would rather expect the answer in a format like the following:
- "The project was great! The Project Confidence Level (PCL) was 4.5 in this release", or
- "The release went fairly well. The PCL was 2.3; we have room to improve in the field ...", or
- "The PCL was 1.9. It didn't meet our expected goal. Let's talk about what we did wrong ..." etc.
And through this unambiguous number we can even set a target for the next release that would be easy to communicate to the project team members.
So now, let's talk about what a Project Confidence Level (PCL) could be. With a PCL we would be able to associate a number with a development release or a project using the available historical facts gathered for that project or similar projects. There are various kinds of facts in a software development project that can be used to compute the PCL. The number of facts will vary depending on the information available in a project; the more facts you include in your formula, the more effective your PCL calculation will be. Below is a sample of some facts.
Average LOC/feature
Average LOC/developer in a Development day
Average LOC/Defect
Average LOC/Critical Defect
Average LOC/Medium Defect
Average LOC/Simple Defect
Average Defect/Feature
Average Critical Defect/Feature
Average Medium Defect/Feature
Average Simple Defect/Feature
Average Development Day/feature
Average project Day/Feature
Average Work days/release
Average LOC/release
Average LOC/Developer in a release
Average Feature/Release
Average Defect/Release
Each of the facts above would be given a weight reflecting its impact on the development effort, with the weights adding up to 100. Consider each average fact to have a confidence level of 1; then the average confidence level of the project works out to 1 under the following formula:
Average PCL = {sum over facts of (each fact's confidence level * its weight)} / 100
After each release, these facts would be counted for that release and the PCL determined through the same formula. A PCL lower than 1 is unacceptable, a PCL greater than 1 meets expectations, and the higher the PCL, the greater the team's achievement.
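The weighted average above can be sketched in a few lines of code (the fact names and numbers here are made up purely for illustration):

```python
def project_confidence_level(fact_scores, weights):
    """Weighted average of per-fact confidence levels.

    fact_scores: dict mapping fact name -> confidence level, where 1.0
                 means the release matched the historical average.
    weights:     dict mapping fact name -> weight; weights must sum to 100.
    """
    if abs(sum(weights.values()) - 100) > 1e-9:
        raise ValueError("weights must add up to 100")
    return sum(fact_scores[name] * w for name, w in weights.items()) / 100

# Hypothetical release: defects/feature improved, LOC/defect slipped a bit
scores = {"LOC/feature": 1.2, "Defect/feature": 1.5, "LOC/defect": 0.8}
weights = {"LOC/feature": 30, "Defect/feature": 50, "LOC/defect": 20}
pcl = project_confidence_level(scores, weights)
# (1.2*30 + 1.5*50 + 0.8*20) / 100 = 1.27 -> above 1, so the release met the goal
```

Note that when every fact comes in exactly at its historical average (all scores 1.0), the PCL is 1.0 regardless of the weights chosen, which matches the baseline described above.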
Thursday, October 15, 2009
Oracle error: ORA-00600: internal error code, arguments: [15264], [], [], [], [], [], [], []
I was playing with partitioned tables in Oracle 10g. At one point, I issued a DROP TABLE statement against a partitioned table, but in the middle of the run I changed my mind and stopped the execution. Then I tried to rerun the statement, only to get stuck on the Oracle error:
ORA-00600: internal error code, arguments: [15264], [], [], [], [], [], [], []
After googling the error, I found some suggestion to purge the recycle bin and bounce the database server to get rid of the error.
SQL> purge recyclebin;
SQL> shutdown immediate;
SQL> startup
In my case, I tried to drop the table after a couple of days, without any database restart, and was able to drop it successfully. So it seems Oracle internally managed to fix the problem over time, even without a restart. Still, the immediate action would be to restart, since you usually won't have the luxury of waiting days to drop a table.
Wednesday, October 14, 2009
Table Partitioning in Oracle
What is table partitioning?
Table partitioning in Oracle is a database feature for storing and managing a table's records in physically partitioned areas of storage, transparently to everyone who accesses the table.
There are different strategies for partitioning a table:
- Range partitioning: partitions the table's records by a range, for example of numbers or dates.
- List partitioning: used when the records should be organized by discrete values, e.g. state, city, country, etc.
- Hash partitioning: a hash function distributes the records across the partitions.
A partition can have subpartitions as well. In Oracle 10g, the valid subpartition types within a partition are:
- Range partition -> List subpartition
- Range partition -> Hash subpartition
The benefits of using partitions are:
- Access to the table's records can be limited to a single partition, so searches are much faster.
- Partitions can be backed up and restored independently without hampering the table's availability.
- Partitions can contain their own indexes (local indexes).
- A partition's data can be moved to a table with a single command (exchange partition).
1. Create a partitioned table
CREATE TABLE PARTITIONED_TABLE
( ID INTEGER NOT NULL,
AS_OF_DATE DATE NOT NULL,
BALANCE NUMBER(25,7),
DEBIT NUMBER(20,5),
CREDIT NUMBER(20,5)
)
PARTITION BY RANGE (AS_OF_DATE)
(
PARTITION YEAR_JAN_2006 VALUES LESS THAN (TO_DATE('01-FEB-2006', 'DD-MON-YYYY')),
PARTITION YEAR_FEB_2006 VALUES LESS THAN (TO_DATE('01-MAR-2006', 'DD-MON-YYYY')),
PARTITION YEAR_MAR_2006 VALUES LESS THAN (TO_DATE('01-APR-2006', 'DD-MON-YYYY'))
)
2. Populate data into the partitioned table, if available
insert into partitioned_table (id, as_of_date, balance, credit, debit)
(select id, as_of_date, balance, credit, debit from regular_table);
insert into bsl_rdr.rdr_erec_partitioned (id, as_of_date, balance, credit, debit)
values (1, '30-JAN-2006', 100.0, 20.0, 100);
Commit;
3. Analyze the table to gather statistics
analyze table partitioned_table compute statistics;
analyze table partitioned_table partition (YEAR_JAN_2006) compute statistics;
Note: on recent Oracle versions, DBMS_STATS.GATHER_TABLE_STATS is the recommended way to gather statistics.
4. Check the statistics of the partitioned table
select t.table_name, t.partition_name, t.num_rows, t.high_value from user_tab_partitions t where t.table_name = 'PARTITIONED_TABLE'
5. Exchange a partition's data into a new table
ALTER TABLE PARTITIONED_TABLE
EXCHANGE PARTITION YEAR_JAN_2006 WITH TABLE [non_partitioned_table];
6. Split a partition into two
ALTER TABLE PARTITIONED_TABLE
SPLIT PARTITION YEAR_JAN_2006 AT (TO_DATE('01-Jan-2006', 'DD-MON-YYYY'))
INTO(
Partition YEAR_DEC_2005,
Partition YEAR_JAN_2006
);
Note: be careful about the order when splitting a partition to create a new partition holding the lower range values. You might want the original partition to keep, say, one month of records while the new partition holds the records below that date. To achieve that, put the new partition name first in the INTO (...) clause and the original partition name second. In the example statement above, I wanted the YEAR_JAN_2006 partition to keep the January 2006 records, while YEAR_DEC_2005 would contain the records from December 2005 and earlier.
7. Add a new partition
alter table PARTITIONED_TABLE
add partition YEAR_MAR_2010 VALUES LESS THAN (TO_DATE('01-APR-2010', 'DD-MON-YYYY'))
8. Renaming a partition
alter table PARTITIONED_TABLE rename partition YEAR_MAR_2006 to YEAR_FEB_2006
9. Drop a partition
alter table partitioned_table
drop partition YEAR_JAN_2006_1
Resources:
http://www.dbazine.com/oracle/or-articles/nanda6 - Methods to migrate from regular table to partitioned table
Friday, October 9, 2009
Data Dictionary: a thought
A Data Dictionary (DD) is usually created by the development lead or the database analyst during or after the creation of the data model. The most popular format for a DD is a spreadsheet, or in rare cases a text document. Some people also use the database's own data dictionary infrastructure (Oracle, for example, lets you record definitions for tables and columns as comments in a system table). The second most important thing after creating the definitions is the change log. Most of the time I've seen it ignored or given little importance in a project, but to me it is the single most important thing the data model designer should emphasize apart from the DD definitions themselves, because even though we like to think of the initial data model as untouched by future application changes, in practice it is not.
I've seen (and used) yet another spreadsheet to maintain the change logs, which is asynchronous and not integrated with the data modeling activities. So the question is how we can tie the change logs to the data model activities. Here is a proposed solution:
We can have a data model for the Data Dictionary itself. Below is a pseudo-DDL script:
TABLE DD_TABLE (
TABLE_NAME -- may be prefixed with schema to uniquely identify
TABLE_DEFINITION
TABLE_STATUS -- Active, Inactive, Decommissioned, Future_Use etc.
CREATE_COMMENT
CREATED_BY
CREATED_DATE
LAST_UPDATED_BY
LAST_UPDATED_DATE
)
TABLE DD_TABLE_HISTORY (
TABLE_NAME (FK from DD_TABLE)
OLD_VALUE
NEW_VALUE -- redundant information for reporting purpose
CHANGE_COMMENT
CHANGED_BY
CHANGED_DATE
)
TABLE DD_COLUMN (
TABLE_NAME (FK from DD_TABLE)
COLUMN_NAME
COLUMN_STATUS -- Active, Inactive, Decommissioned, Future_Use etc.
CREATED_BY
CREATED_DATE
LAST_UPDATED_BY
LAST_UPDATED_DATE
)
TABLE DD_COLUMN_HISTORY (
TABLE_NAME (FK from DD_TABLE)
COLUMN_NAME (FK from DD_COLUMN)
OLD_VALUE
NEW_VALUE -- redundant information for reporting purpose
CHANGE_COMMENT
CHANGED_BY
CHANGED_DATE
)
Triggers can be used to ensure a default log entry goes into the history tables unless the data model designer enters one explicitly. What do you think?
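As a sketch of that trigger idea (assuming the DD_TABLE / DD_TABLE_HISTORY structures from the pseudo-DDL above, with TABLE_DEFINITION stored as a character column; the column names are the hypothetical ones used there):

```sql
-- Whenever a table's definition changes, write a default row to the history table.
CREATE OR REPLACE TRIGGER trg_dd_table_history
AFTER UPDATE OF table_definition ON dd_table
FOR EACH ROW
BEGIN
  INSERT INTO dd_table_history
    (table_name, old_value, new_value, change_comment, changed_by, changed_date)
  VALUES
    (:OLD.table_name, :OLD.table_definition, :NEW.table_definition,
     'auto-logged by trigger', USER, SYSDATE);
END;
/
```

A similar trigger on DD_COLUMN would cover the column-level history table.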
Wednesday, September 16, 2009
A Dictionary of unusual Java Exceptions
Exception # 1:
The transaction service encountered an error on an xa_recover operation. The resource was com.ibm.ws.rsadapter.spi.WSRdbXaResourceImpl@510051. The error code was XAER_RMERR. The exception stack trace follows: javax.transaction.xa.XAException
Description:
The XA transaction manager tries to recover from an earlier crash (on the application server side, the database server side, or both), but the user id used to connect to the database doesn't have permission to access certain recovery tables.
Solution:
Grant access to the DBA_PENDING_TRANSACTIONS view for the particular database user in question.
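For an Oracle backend, the grant is roughly as follows (run as a privileged user; APP_USER is a placeholder for whatever user the datasource connects as):

```sql
-- APP_USER is a placeholder; substitute the datasource's database user.
GRANT SELECT ON DBA_PENDING_TRANSACTIONS TO app_user;
```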
Exception # 2:
Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested 32776 bytes for Chunk::new. Out of swap space?
Description:
WebSphere Application Server 6.1.17 runs out of memory with the above error trace on the console.
Solution:
This is a bug in the JVM's JIT compiler and the fix is not confirmed yet. The workaround is to use the configuration below:
-XX:+UseDefaultStackSize -Xss256K
Resources:
http://www.ibm.com/developerworks/websphere/library/techarticles/0407_woolf/0407_woolf.html - Distributed transactions in WAS (Exception # 1)
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4916142 - OutOfMemory error (Exception # 2)
Wednesday, May 13, 2009
Clear Case concepts
#1 Clear Case merge
Merging in the ClearCase Diff Merge tool can look confusing at times. Here are the two merge scenarios as Diff Merge presents them.
Rebase:
The 1st window is the base: it contains the version of the file from which the change was made. The 2nd window shows the changes made by others and delivered to the integration stream. The 3rd window shows the development stream to which you'd rebase.
Deliver:
The 1st window is the base: it contains the version of the file from which you started your work and made your changes. The 2nd window shows the changes now in your development stream. The 3rd window shows the integration stream to which you're going to deliver your changes.
Resources:
http://clearcase.weintraubworld.net/ - not-so-official clear case website
http://clearcase.weintraubworld.net/cc.branch.html#real - Branching theory of clear case
http://clearcase.weintraubworld.net/cc.branch2.html - Branching strategy
Friday, May 8, 2009
Useful ClearCase commands
Frequently used cleartool commands:
List the checked-out files
cleartool> lsco
Check in a file
cleartool> ci [-nc] filename
Check out a file
cleartool> co filename
Undo a checkout
cleartool> unco filename
Add a file to source control
cleartool> mkelem filename
Resources:
http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?cmd=getdoc&coll=0530&db=man&fname=1%20cleartool - Contains list of commands
http://www.ibm.com/developerworks/rational/library/836.html - cheat sheet
Thursday, April 30, 2009
Oracle Table Join
There are mainly three types of joins in Oracle: the Cartesian (or Cross) Join, the Inner Join, and the Outer Join.
Cartesian Join: This join returns the Cartesian product of two tables, i.e. it matches every row of one table with every row of the other.
Inner Join: Joins two tables and returns only the rows that match in both.
Outer Join: There are different flavors of Outer Joins: Full Outer Join, Left Outer Join, and Right Outer Join. They're described with example illustrations below:
Full outer join
Takes all matched rows from both tables, plus the unmatched rows from both the left and the right table
select one.id as one_id, one.name as one_name , two.id as two_id, two.address as two_address
from one full outer join two on one.id = two.id
Left outer join
Takes all the matched rows as well as the unmatched rows from the left table (i.e. table one)
Deprecated syntax:
select one.id as one_id, one.name as one_name , two.id as two_id, two.address as two_address
from one, two
where one.id = two.id(+)
Suggested syntax:
select one.id as one_id, one.name as one_name , two.id as two_id, two.address as two_address
from one left outer join two on one.id = two.id
Right outer join
Takes all the matched rows as well as all the leftover rows from the right table (i.e. table two)
Deprecated syntax
select one.id as one_id, one.name as one_name , two.id as two_id, two.address as two_address
from one, two
where one.id(+) = two.id
Suggested syntax
select one.id as one_id, one.name as one_name , two.id as two_id, two.address as two_address
from one right outer join two on one.id = two.id
There is another flavor of join known as the Self Join, where both sides are actually the same table used with different aliases. All three kinds of joins mentioned above can be used as a self join.
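For completeness, here is an inner join on the same one/two tables, plus a self-join sketch (the PARENT_ID column is hypothetical, added only to illustrate aliasing):

```sql
-- Inner join: only rows whose ids match in both tables
select one.id as one_id, one.name as one_name, two.id as two_id, two.address as two_address
from one inner join two on one.id = two.id;

-- Self join: the same table aliased twice.
-- PARENT_ID is a hypothetical column on table one, used only for illustration.
select child.id, child.name, parent.name as parent_name
from one child inner join one parent on child.parent_id = parent.id;
```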
Resources
http://www.praetoriate.com/oracle_tips_outer_joins.htm - Donald K. Burleson on Joins. He writes very straight & simple.
http://www.sqlteam.com/article/additional-criteria-in-the-join-clause - Filtering the join
Thursday, March 19, 2009
Rational Team Concert (RTC): startup
The cool features of RTC are:
1. A work schedule that shows your work in different buckets: Today, Tomorrow, Next Week, Future, and Not Planned. It is also highly customizable.
2. Moving changed files between different 'Change Sets' is another cool feature. It may sound basic, but people who have been trapped in Rational UCM feel the difference.
3. Suspending a change set and resuming it later. Awesome!
4. The incoming change-set bucket shows all the change sets not yet accepted into my Repository Workspace, so I know exactly what's pending to bring my workspace up to date.
Monday, March 9, 2009
Setup WAS 6.1 Test Environment
Issue#1:
When the automatic publishing feature is enabled in the server, the same application loads multiple times and sometimes the server runs out of memory and crashes. Also, after I make changes to my code and try to republish, the server goes out of memory. I couldn't identify the reason for this strange behavior, but my guess is that the application server somehow doesn't clean up its references to the application before it redeploys the changes. So I came up with a workaround that, although irritating and a little slower, works: I manually uninstall the application (through the admin console) and then publish it again.
Monday, February 23, 2009
Bulk update in Oracle
When you have millions of records in your system and want to update rows in a table, extra care is required. There are two options for a bulk update:
* Update in a single shot - Faster and easier to do, but it becomes high risk when the update statement modifies millions of records (say, 10 million). You'd be stuck inside a single update with no way to see progress and, worst of all, your redo/undo can fill up and other systems accessing the database would be heavily slowed down. In the worst case the database might even crash. Below is a sample update query with this approach:
UPDATE [table]
SET [column_name] = 'some value'
WHERE [some clause]
* Update in chunks - A slower process, but it overcomes the limitations and risks the one-shot update faces. The only downside is the speed; it could take 10 or 20 times as long as the first approach. Below is a sample update with this approach:
SET SERVEROUTPUT ON SIZE 1000000
BEGIN
  LOOP
    UPDATE [table]
       SET [column_name] = 'some value'
     WHERE [some clause]
       AND ROWNUM <= [how_many_at_a_time];
    DBMS_OUTPUT.PUT_LINE('Updated ' || SQL%ROWCOUNT || ' rows in this batch');
    EXIT WHEN SQL%NOTFOUND;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
Resources:
http://oracle-online-help.blogspot.com/2006/12/delete-in-batches.html - delete in batch
Tuesday, February 17, 2009
Few interesting concepts deep inside the Java Virtual Machine (JVM)
Here are few interesting concepts that are implemented in the JVM:
1. Encapsulation has two modes: static and dynamic. Static encapsulation is the familiar compile-time kind, where access modifiers control the visibility of a Java class's members. Runtime encapsulation ensures those access rules also hold while the program runs inside the JVM, and it is enforced by the classloader: for example, package-private (default) access won't be honored unless both classes are loaded by the same classloader.
2. 0xCAFEBABE is the first 4 bytes (the magic number) of a Java class file in bytecode format.
Resources:
http://www.artima.com/insidejvm/ed2/introarch.html - JVM Architecture online book
http://betterexplained.com/articles/understanding-big-and-little-endian-byte-order/
Tuesday, February 10, 2009
Oracle Tips - DDL
1. Add column to a table
ALTER TABLE table_name
ADD column_name data_type constraint;
2. To add multiple columns to a table -
ALTER TABLE table_name
ADD (
column_name data_type constraint,
column_name2 data_type constraint
);
3. Rename column
ALTER TABLE table_name
RENAME COLUMN column_name to column_new_name;
4. Modify column
ALTER TABLE table_name
MODIFY ( column_name data_type constraint);
5. Drop column
ALTER TABLE table_name
DROP COLUMN column_name;
6. Drop multiple columns
ALTER TABLE table_name
DROP (
column_name1, column_name2, ....);
7. Index Management
To see the constraints of a table:
select table_name, constraint_name, constraint_type from user_constraints;
To see the index of a table
select table_name, index_name, column_name from user_ind_columns;
Create index
create [bitmap] index index_name on table_name
(column_name)
Find and rebuild an index
select * from user_indexes;
ALTER INDEX index_name REBUILD;
Analyze index and table
ANALYZE INDEX index_name COMPUTE STATISTICS; -- or ESTIMATE/DELETE STATISTICS, or VALIDATE STRUCTURE
ANALYZE TABLE table_name COMPUTE STATISTICS; -- or ESTIMATE/DELETE STATISTICS, or VALIDATE STRUCTURE
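Note that ANALYZE is considered a legacy way to gather optimizer statistics; DBMS_STATS is the recommended interface. A minimal sketch (the table name is a placeholder):

```sql
-- Preferred over ANALYZE for gathering optimizer statistics.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,          -- current schema
    tabname => 'TABLE_NAME',  -- placeholder table name
    cascade => TRUE);         -- also gather stats for the table's indexes
END;
/
```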
8. Manage privileges on table
GRANT select, insert, update, delete on SCHEMA.TABLE_NAME to ROLE/USER
9. Create Synonym of table
CREATE OR REPLACE [public] SYNONYM SYNONYM_NAME for SCHEMA.TABLE_NAME
Check the synonyms created by the user
select * from all_synonyms
10. Manage Constraint
To see the constraints of a table:
select table_name, constraint_name, constraint_type from user_constraints;
Drop constraint
alter table table_name
drop constraint constraint_name
Add a foreign key constraint to a table
ALTER TABLE table_name
ADD CONSTRAINT constraint_name
FOREIGN KEY (column_name) REFERENCES parent_table (parent_column)
ON DELETE CASCADE;
11. Create CHECK constraint on a column
ALTER TABLE TABLE_NAME
ADD CONSTRAINT CHK_Y_N
CHECK (COLUMN_NAME IN ('Y', 'N'))
/
12. Partitioned Table
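A minimal sketch of what a partitioned table can look like (all names and dates here are illustrative, not from a real schema) - a range-partitioned table with a LOCAL index:

```sql
-- Illustrative names only: range partitioning by date, plus a LOCAL index.
CREATE TABLE sales_part (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2008 VALUES LESS THAN (TO_DATE('01-01-2009', 'DD-MM-YYYY')),
  PARTITION p_2009 VALUES LESS THAN (TO_DATE('01-01-2010', 'DD-MM-YYYY')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- LOCAL: one index partition per table partition
CREATE INDEX sales_part_idx ON sales_part (sale_date) LOCAL;
```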
Resources:
https://www1.columbia.edu/sec/acis/dba/oracle/faq-sql.html
http://www.dba-oracle.com/ - has useful queries
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:969054700346631425 - Bulk move
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:273215737113 - Update columns from other table
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:974699100346868838 - should we use db user or application user?
http://www.adp-gmbh.ch/ora/misc/integrity_constraints.html - Oracle's Integrity Constraints
http://jenniferlinca.wordpress.com/ - Blog containing good Oracle stuff
http://forums.oracle.com/forums/message.jspa?messageID=2348825 - Fun to read
http://www.dba-oracle.com/art_9i_indexing.htm - Performance tuning with index
http://www.orafaq.com/node/855 - Common errors using Outer Join
http://marist89.blogspot.com/ - Wonderful quote from the site - "The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt." --Bertrand Russell. A good database blog
http://www.orafaq.com/forum/t/54714/0/ - Batch delete
http://www.dba-oracle.com/oracle_news/2004_1_19_using_function_based_indexes%20.htm - Function based index
http://www.oracle-dba-online.com/sql/oracle_table_partition.htm - Partitioned table