Chris Stephens
2018-09-13 21:17:19 UTC
We have an application that will be deployed at an unknown number of
locations. The application needs to ingest data from every location into a
central repository, and at some locations it will also store data in
non-Oracle databases. All locations will have the same schema, and all
surrogate keys for all tables across all locations have to be unique. Some
of the locations running Oracle will be "small" and some will require beefy
RAC systems to support the workload.
Two methods of satisfying this requirement are being considered:
1) Use a multi-column or concatenated-column PK built from some sort of
"site id" plus a sequence-based value on the Oracle systems, and figure out
the equivalent in the other database products. To allow for scalability on
busy RAC systems I currently plan to recommend site_id + inst_id +
mod(session_id, <some number>) + a sequence value (a rough sketch of both
options is below, after option 2).
2) Use UUIDs generated in the application code for all surrogate keys
everywhere.
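
For concreteness, here is a rough sketch of what each option might look
like on the Oracle side. Table and column names are placeholders, and 97 is
an arbitrary session-bucket count:

-- Option 1: multi-column surrogate key (site + instance + session bucket + sequence)
create sequence widget_seq cache 1000;

create table widgets_opt1 (
  site_id      number        not null,
  inst_id      number        not null,
  sess_bucket  number        not null,
  seq_val      number        not null,
  payload      varchar2(100),
  constraint widgets_opt1_pk
    primary key (site_id, inst_id, sess_bucket, seq_val)
);

insert into widgets_opt1 (site_id, inst_id, sess_bucket, seq_val, payload)
values (42,
        to_number(sys_context('userenv', 'instance')),
        mod(to_number(sys_context('userenv', 'sid')), 97),
        widget_seq.nextval,
        'example row');

-- Option 2: application-generated UUID stored as RAW(16)
create table widgets_opt2 (
  id      raw(16) default sys_guid() not null,
  payload varchar2(100),
  constraint widgets_opt2_pk primary key (id)
);

-- a UUID generated in the application arrives as 32 hex characters
insert into widgets_opt2 (id, payload)
values (hextoraw('3f2504e04f8941d39a0c0305e82c3301'), 'example row');

If we go with option 2, storing the UUID as RAW(16) rather than VARCHAR2(36)
at least keeps the key (and the index entries) as narrow as possible.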
I'm being asked to weigh in on the advantages and disadvantages of both.
Option 1 is ideal from an Oracle perspective if performance is the primary
concern (there will be locations that will stress any hardware we throw at
them). Option 2 is preferred by the developers, and I think it is
functionally fine.
My primary concern with option 2 is the completely random nature of UUIDs:
all of these PK/unique indexes would need to be fully cached to maintain
performance, which will stress Oracle's buffer management and result in far
more physical I/O for inserts, as well as for any other database activity
whose blocks get aged out of the buffer cache. I have explained that
concern as best I can, but the developers are (justifiably) asking for
something more concrete than "this will likely become a problem at some
point, once application activity exceeds Oracle's ability, on the current
hardware, to keep the active blocks in the buffer cache". I am going to try
to come up with a test that demonstrates the degraded performance on our
current 3-node 12.2 RAC system (child's play compared to what the eventual
production and test systems will be). Does anyone have suggestions on how
best to set up that test and demonstrate the downsides of the UUID approach
at some currently unknown scale? The rough shape of what I have in mind so
far is below. Also, does anyone have experience using externally generated
UUIDs as surrogate keys in Oracle? Good/bad/indifferent?
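
This is only a sketch: table names and row counts are placeholders,
standard_hash of a random string stands in for an application-generated
UUID, and the intent is to run the loads from multiple concurrent sessions
on each node and scale them until the UUID index no longer fits in the
buffer cache.

create sequence t_seq cache 10000;

-- sequence-driven PK: inserts hit the right-hand edge of the index
create table t_seq_pk  (id number  primary key, pad char(200));

-- UUID-style PK: inserts scatter across the whole index
create table t_uuid_pk (id raw(16) primary key, pad char(200));

-- run each load from its own session(s) so the statistics stay separable
begin
  for i in 1 .. 1000000 loop
    insert into t_seq_pk values (t_seq.nextval, 'x');
    if mod(i, 1000) = 0 then commit; end if;
  end loop;
  commit;
end;
/

begin
  for i in 1 .. 1000000 loop
    -- random 16-byte value standing in for an app-generated UUID
    insert into t_uuid_pk
    values (standard_hash(dbms_random.string('x', 20), 'MD5'), 'x');
    if mod(i, 1000) = 0 then commit; end if;
  end loop;
  commit;
end;
/

-- compare the per-session cost of the two insert streams, e.g.:
select sn.name, ms.value
from   v$mystat ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name in ('db block gets', 'physical reads',
                   'leaf node splits', 'leaf node 90-10 splits');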
I'm reaching out to oracle-l more than ever before, as these are problems
I've never had to deal with, and I truly appreciate all the people who take
the time to chime in.
Thanks!
Chris