#1
  1. Contributing User
    Devshed Novice (500 - 999 posts)

    Join Date
    Aug 2002
    Location
    Queensland, Australia
    Posts
    827
    Rep Power
    142

    DECIMAL or multiplied INT


    I'm dealing with a DB (not mine) where a column stores a numerical value.

    The value may be a count of something, making an INT an appropriate type. But it may also be monetary, where DECIMAL would be the more appropriate type.

    What is done currently is that monetary values are stored in cents (i.e. they are multiplied by 100). So the column can be of type INT.

    But I'm half curious: if you know the number of decimal places, is it so bad to use an INT multiplied by 100, versus a DECIMAL with 2 decimal places? For example, is it so bad to store 12345 instead of 123.45, so long as you know to divide 12345 by 100? Or even divide by 1000 or 10000, depending on the precision needed? Or are there hidden consequences with this?
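    For illustration, a minimal sketch of the two approaches (the table and column names here are invented for the example):

    Code:
    -- Option A: DECIMAL stores the monetary value directly
    CREATE TABLE amounts_decimal (
        amount DECIMAL(12,2) NOT NULL   -- holds e.g. 123.45
    );

    -- Option B: INT stores the value scaled by 100 (cents)
    CREATE TABLE amounts_int (
        amount_cents INT NOT NULL       -- holds 12345, meaning 123.45
    );

    -- Reading the scaled value back as money means dividing on the way out
    SELECT amount_cents / 100 AS amount FROM amounts_int;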
    Ooh, they have the Internet on computers now!
  2. #2
  3. No Profile Picture
    Still Learning
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Location
    Montreal, Canada
    Posts
    55
    Rep Power
    39
    Taking a jump to the left, and not directly answering your question.

    Why are they using the field for two purposes? How will they know which interpretation to use? This gives me the heebie-jeebies. I can't immediately think what apocalypse this will bring on, but it seems to be bad data design.

    Give us more detail as to why this is going on, so maybe your original question will be moot.

    Answering it anyway... if the variable is called Cash_in_hundredthofacent, there would be no problem.
  4. #3
  5. Contributing User
    Devshed Novice (500 - 999 posts)

    Join Date
    Aug 2002
    Location
    Queensland, Australia
    Posts
    827
    Rep Power
    142
    Originally Posted by admiraln
    Taking a jump to the left, and not directly answering your question.
    That's understandable. A redesign is impossible, but I enjoy what-ifs, which is why I asked this question.

    The table has about 6 columns that make up a unique key, and then the value. The 6 columns are all foreign keys to resources in other tables. One of those is a resource that defines what the value is or represents, so we can join to that table and identify whether it's monetary or a count.

    The alternatives I see are to have two value columns, one INT and the other DECIMAL, or two tables that are identical besides the datatype of the value. I think it's pretty obvious what the side effect of that will be: the application needs to be able to query multiple values for multiple resources, and splitting them will require UNIONs to get the full results it requires.
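    To illustrate the UNION problem, here's a rough sketch (all names invented) of what the split design would force on a query that wants all values for a set of resources:

    Code:
    SELECT resource_id, value_int AS value
      FROM resource_values_int
     WHERE resource_id IN (1, 2, 3)
    UNION ALL
    SELECT resource_id, value_dec AS value
      FROM resource_values_decimal
     WHERE resource_id IN (1, 2, 3);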

    There's little point in me describing exactly what this table does, as it's abstract and has little real-world analogy.

    But when making a judgement, you might consider the situation where a product puts its settings in a 'settings' table with two columns: `settings`.`name` and `settings`.`value`.

    Now, some settings within the product might store a string, some an integer, others a boolean. And some might be best represented by an enum or set.

    I think with that example, you can see that enforcing the 'correct' DB design makes for a complicated set of tables. And when you think about it, what is the real-world benefit? Splitting one table into many doesn't negate the need for the application to be very aware of what the values represent--it's not really normal to wrap application polymorphism around DB datatypes.
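    As a sketch of that settings example (assuming a VARCHAR value column, since it has to hold every representation as text):

    Code:
    CREATE TABLE settings (
        name  VARCHAR(64)  NOT NULL PRIMARY KEY,
        value VARCHAR(255) NOT NULL   -- string, integer, boolean, enum... all stored as text
    );

    INSERT INTO settings (name, value) VALUES
        ('site_title', 'My Shop'),       -- string
        ('items_per_page', '25'),        -- integer
        ('maintenance_mode', '0');       -- boolean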

    Rest assured, the application I'm referring to knows when to treat a value as currency or as an integer. And these are the only two types it ever needs to concern itself with.

    The values can be large--they will sometimes crack 2 billion--so technically we are using a BIGINT (unsigned), not an INT. And there are lots of rows; I think in some setups we've cracked 60 million. We don't select based on value, so there is no index on the value.

    But take away the context of this DB and consider the very simple question: where an INT "can" be used to represent a DECIMAL, what are the pros and cons of doing so?
    Ooh, they have the Internet on computers now!
  6. #4
  7. No Profile Picture
    Still Learning
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Location
    Montreal, Canada
    Posts
    55
    Rep Power
    39
    Well then, look at the implementation of DECIMAL:
    http://msdn.microsoft.com/en-us/library/ms187746.aspx
    17 bytes gets you 38 digits.

    BIGINT:
    http://msdn.microsoft.com/en-us/library/ms187745.aspx
    8 bytes gives you 19 digits.

    Oracle is similar:
    http://www.techonthenet.com/oracle/datatypes.php
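    A rough worked example, using the SQL Server tiers from that first link: any DECIMAL with 10 to 19 digits of precision costs 9 bytes per value, so a DECIMAL(12,2) (enough for values cracking 2 billion cents) spends 9 bytes where an unsigned BIGINT spends 8. Over the 60 million rows mentioned earlier, that's only about 60 MB of difference, before row overhead and indexes.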
    Last edited by admiraln; January 25th, 2013 at 09:19 AM.
