We're building a new web app that will have an offline iPad/Android app version on a number of local devices, and those devices will be inserting new data. As such, we require UUIDs to allow the necessary two-way synchronization with the master database. For this we'll be storing the UUID as a BINARY(16) primary key.

The problem I've learned after researching is that the time required for non-sequential primary key inserts will increase over time, and that these inserts lead to fragmentation (as answered here). The benefit of AUTO_INCREMENT is that new rows are usually just appended to the end of the table and so do not run into the speed problems associated with UUIDs.

My question is whether it is a better idea to use an AUTO_INCREMENT column as the primary key and have the UUID column as a non-null unique index. Presumably this would give the speed benefits of sequential inserts while retaining the UUIDs required for synchronizing distributed databases.

The one issue I can see with this is that the UUID needs to be used as a reference (via foreign key constraints) to other tables (e.g. a list of problems attached to an inspection, which in turn is attached to a site, all of which are involved in inserts and so all of which require UUIDs). Semantically, it makes more sense for the primary key to be the reference, but since this is a distributed system we can't use AUTO_INCREMENT values for these. Are there drawbacks to using a (non-null) unique index, rather than the primary key, for these references (and, of course, for the JOINs that will come with them)?

Considering that it is perhaps better to have the UUID as the primary key (as that's semantically what it is), would I gain the benefit of sequential inserts if I set the UUID as the primary key and the AUTO_INCREMENT column as a non-null unique index? Or is it only the primary key that is relevant when determining where a new row is inserted? It might also be worth noting that the master (online) database uses MySQL (InnoDB) and the distributed (offline) databases use SQLite.

Sharing our project experience as a reference: instead of using a UUID or GUID, create a sequential surrogate key in your master database or in your data pipeline. Once you have a big distributed data warehouse, using a UUID or GUID as the unique key and then joining on it performs poorly.

We have about 300 billion records saved in a parallel data warehouse; in our system, auto-incrementing keys are not even supported (unique keys aren't supported either, but that doesn't affect logical uniqueness). We use an 8-byte BIGINT as the primary key. During ETL processing, we only need to track the file ID in the master database, which does support auto-incrementing IDs. When we generate a record ID while processing a file, we combine the file ID, one reserved byte, and a 4-byte row ID. The 3-byte file ID allows 2^24 files; at roughly 2,000 files loaded per day, that covers about 23 years. The remaining 4 bytes allow about 4 billion rows per file, and no file of ours comes anywhere near that.

A version 1 UUID does store timestamp information, which can occasionally be useful; it is hard for a malicious user to guess; and it is stateless, so it can be generated on the fly. Even so, for warehouse-scale joins a sequential surrogate key is the better choice.
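The answer's key layout (3-byte file ID, 1 reserved byte, 4-byte row ID packed into one 8-byte integer) can be sketched as follows. The function names and the use of Python are illustrative, not from the original project:

```python
# Bit layout of the 8-byte record ID described above:
# [ 24-bit file ID | 8 reserved bits | 32-bit row ID ]
FILE_ID_BITS = 24   # up to 2**24 (~16.7M) files
RESERVED_BITS = 8   # reserved byte, kept at zero here
ROW_ID_BITS = 32    # up to 2**32 (~4.3B) rows per file

def make_record_id(file_id: int, row_id: int, reserved: int = 0) -> int:
    """Pack file ID, reserved byte, and row ID into one 64-bit integer."""
    if not 0 <= file_id < (1 << FILE_ID_BITS):
        raise ValueError("file_id out of range")
    if not 0 <= row_id < (1 << ROW_ID_BITS):
        raise ValueError("row_id out of range")
    # Note: file IDs at or above 2**23 would set the sign bit of a
    # signed BIGINT; whether that matters depends on the warehouse.
    return (file_id << (RESERVED_BITS + ROW_ID_BITS)) \
        | (reserved << ROW_ID_BITS) | row_id

def split_record_id(record_id: int) -> tuple[int, int, int]:
    """Recover (file_id, reserved, row_id) from a packed record ID."""
    row_id = record_id & ((1 << ROW_ID_BITS) - 1)
    reserved = (record_id >> ROW_ID_BITS) & ((1 << RESERVED_BITS) - 1)
    file_id = record_id >> (RESERVED_BITS + ROW_ID_BITS)
    return file_id, reserved, row_id
```

The pack/unpack pair is a round trip, so the warehouse can always attribute a record back to its source file from the key alone.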
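Returning to the question's proposed layout, the offline SQLite side can be sketched with Python's standard library: an integer surrogate primary key for insert locality, a NOT NULL UNIQUE 16-byte UUID column for synchronization, and foreign keys that reference the unique UUID column rather than the primary key. The table and column names here are invented for illustration:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default

# In SQLite, INTEGER PRIMARY KEY is the auto-assigned rowid, the
# closest analogue to MySQL's AUTO_INCREMENT. The uuid column plays
# the role of a BINARY(16) NOT NULL UNIQUE index.
conn.execute("""
    CREATE TABLE site (
        id   INTEGER PRIMARY KEY,
        uuid BLOB NOT NULL UNIQUE
    )""")
# Foreign keys may reference any UNIQUE column, not just the primary key.
conn.execute("""
    CREATE TABLE inspection (
        id        INTEGER PRIMARY KEY,
        uuid      BLOB NOT NULL UNIQUE,
        site_uuid BLOB NOT NULL REFERENCES site(uuid)
    )""")

site_uuid = uuid.uuid1().bytes  # 16 raw bytes of a version 1 UUID
conn.execute("INSERT INTO site (uuid) VALUES (?)", (site_uuid,))
conn.execute("INSERT INTO inspection (uuid, site_uuid) VALUES (?, ?)",
             (uuid.uuid1().bytes, site_uuid))
conn.commit()
```

One trade-off of this shape is that every join between child and parent goes through the 16-byte unique index instead of the integer primary key; whether that cost is acceptable is essentially what the question is asking.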