A: Can you tell me something about yourself?
P: I have 2.5 yrs of experience as an Oracle DBA and approximately 5 yrs on Oracle RDBMS. I have worked on Oracle versions from 5.0 to 7.2. My overall experience in the S/W field is over 8 yrs.
I have worked on various platforms such as PCs, minicomputers and mainframes. Besides administration of Oracle RDBMS, I have worked on other large databases in various capacities, from developer to project leader.
A: What was your role in BAYBIS?
P: My role was DBA. I briefly told him about the nature of the application and the work I had done.
A: Were you involved in database modeling?
P: Yes, I was involved in modeling and responsible for database schema creation for BAYBIS.
A: Have you done tuning in BAYBIS?
P: Yes. Performance aspects were taken into account at design time itself.
In the production stage, the performance of the system was continuously monitored and corrective steps were taken to get the best performance.
A: What are the init parameters that you tuned?
P: I mentioned a few parameters such as the database buffer cache, the shared pool and the log buffer.
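For illustration, the corresponding init.ora entries would look roughly like the following; the values here are only placeholders, not the ones used on BAYBIS:

    # init.ora fragment (illustrative values only)
    db_block_buffers = 2000        # database buffer cache, counted in DB_BLOCK_SIZE blocks
    shared_pool_size = 10000000    # shared pool (library cache + dictionary cache), in bytes
    log_buffer       = 163840      # redo log buffer, in bytes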
A: What is the use of the shared pool?
P: It has two parts: the library cache and the dictionary cache. The library cache stores the SQL statements and stored procedures issued by users, while the dictionary cache holds Oracle data dictionary information. Basically this pool is shared by all users connected to the system. In a multi-threaded server architecture, the shared pool also stores the private SQL areas containing session information.
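As an illustration (not part of the interview), the health of the two parts is commonly checked with queries along these lines against the V$LIBRARYCACHE and V$ROWCACHE views:

    -- library cache: reloads should be a small fraction of pins
    SELECT SUM(pins) executions, SUM(reloads) cache_misses,
           ROUND(SUM(reloads) / SUM(pins) * 100, 2) reload_pct
    FROM   v$librarycache;

    -- dictionary cache: getmisses should be low relative to gets
    SELECT SUM(gets) total_gets, SUM(getmisses) total_misses
    FROM   v$rowcache;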
A: Have you tuned the database buffer cache?
P: Yes, based on the usage of the database buffer cache at peak hours, I tuned its size.
A: How did you arrive at the buffer cache size?
P: You can make that out from the hit ratio on the database buffer cache.
A: How is the hit ratio used to determine it?
P: If the hit ratio is near 100%, one can conclude that the buffer cache size is sufficient. If the hit ratio is low, misses on the buffer cache can be reduced by increasing its size.
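For illustration, the buffer cache hit ratio is usually computed from three statistics in V$SYSSTAT, roughly as follows:

    SELECT 1 - (phy.value / (cur.value + con.value)) "Buffer Cache Hit Ratio"
    FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
    WHERE  cur.name = 'db block gets'
    AND    con.name = 'consistent gets'
    AND    phy.name = 'physical reads';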
A: How do you arrive at the optimum size of the buffer cache?
P: Before increasing the size of the buffer cache, one can predict the effect of the increased size by using two dynamic tables, named X$KB… By comparing the hit ratios estimated for various buffer cache sizes, one can arrive at an optimum size that contains no more blocks than required.
A: Have you worked on distributed data processing?
P: I have worked on a multi-threaded server system with SQL*Net 2.0 and configured the parameters for the network connection. For distributed processing, we need to create a database link to connect to the target database.
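By way of example (the link name, account and SQL*Net alias below are made up, not from the interview), a database link and a remote query might look like this:

    CREATE DATABASE LINK sales_link
      CONNECT TO scott IDENTIFIED BY tiger
      USING 'SALES';            -- SQL*Net 2.0 service alias from tnsnames.ora

    SELECT * FROM emp@sales_link;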
A: How will you handle it if one site has a problem during a transaction? How will you resolve it?
P: There is a concept called two-phase commit. In the first (prepare) phase, the server makes sure that all sites are available for the transaction. If all are available, then the commit/rollback phase occurs. If any transaction is held up due to the unavailability of resources at the target site, the transaction is marked as an in-doubt transaction and its details are stored in the data dictionary of the target site. It will later be committed or rolled back by the RECO background process. The DBA can also see the status of an in-doubt transaction and, based on the comment given along with the commit/rollback, commit or roll it back manually.
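As a sketch (the transaction ID shown is a placeholder), an in-doubt transaction can be examined and forced manually roughly like this:

    -- list pending in-doubt transactions and any commit comment/advice
    SELECT local_tran_id, state, advice, tran_comment
    FROM   dba_2pc_pending;

    -- then resolve the named transaction by hand
    COMMIT FORCE '1.21.17';
    -- or
    ROLLBACK FORCE '1.21.17';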
A: What was your backup strategy?
P: Every day I used to take a hot backup along with the archived redo logs. Besides that, I used to take a logical export of a few important tables using the exp utility.
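For illustration only (the tablespace, file and table names are placeholders, not from BAYBIS), a hot backup of one tablespace plus a logical export might go roughly like this:

    ALTER TABLESPACE users BEGIN BACKUP;
    -- copy the tablespace's data files at the operating system level, e.g.
    --   cp /u01/oradata/users01.dbf /backup/users01.dbf
    ALTER TABLESPACE users END BACKUP;
    ALTER SYSTEM SWITCH LOGFILE;   -- force the current redo log to be archived

    -- OS command line: logical export of selected tables
    exp scott/tiger tables=(dept,bonus) file=key_tables.dmp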
A: Have you done database recovery?
P: Yes. I have recovered the database five times, mostly restoring data files and rolling them forward using archived redo logs and old data file backups.
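For illustration (the file path is a placeholder), recovering a single lost data file might go roughly like this:

    -- restore the damaged data file from the last hot backup at the OS level, then:
    ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf' OFFLINE;
    RECOVER DATAFILE '/u01/oradata/users01.dbf';    -- applies archived and online redo
    ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf' ONLINE;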
A: Have you done trigger-based applications?
P: Yes, I have done many applications using database triggers. If you have something specific in mind, I can tell you more.
A: Have you handled DML statements using triggers?
P: Yes. I described the use of database triggers in one of my applications.
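As an illustration (the EMP and EMP_AUDIT tables are assumed sample tables, not from the interview), a typical row-level DML trigger might look like this:

    CREATE OR REPLACE TRIGGER emp_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON emp
    FOR EACH ROW
    DECLARE
      v_action VARCHAR2(10);
    BEGIN
      -- record which DML statement fired the trigger
      IF INSERTING THEN
        v_action := 'INSERT';
      ELSIF UPDATING THEN
        v_action := 'UPDATE';
      ELSE
        v_action := 'DELETE';
      END IF;
      INSERT INTO emp_audit (empno, changed_on, action)
      VALUES (NVL(:NEW.empno, :OLD.empno), SYSDATE, v_action);
    END;
    /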
A: How would you handle duplicate rows in a table? I want to find the duplicate rows. How would you go about it using the primary key?
P: When one enables the primary key constraint using the ALTER TABLE command with the EXCEPTIONS clause, the duplicate row information goes into a pre-defined exceptions table.
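By way of example (EMP and EMPNO are assumed sample names), the two usual approaches might look like this:

    -- create the exceptions table once with the Oracle-supplied script utlexcpt.sql, then:
    ALTER TABLE emp
      ADD CONSTRAINT emp_pk PRIMARY KEY (empno)
      EXCEPTIONS INTO exceptions;

    -- rows that violate the key can then be inspected through their ROWIDs
    SELECT e.*
    FROM   emp e, exceptions x
    WHERE  e.rowid = x.row_id;

    -- or the duplicates can be found directly by grouping on the intended key
    SELECT empno, COUNT(*)
    FROM   emp
    GROUP  BY empno
    HAVING COUNT(*) > 1;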
A: That's fine. Do you have any questions?
P: Yes, I would like to know more about the client and the nature of the application.
A: He told me about the client's business and the type of applications using the database.
P: Okay. Thanks. No more questions.
A: Okay. We will call you back. Thanks. Bye.
P: Thanks. Bye.