August 26th, 2005, 04:43 PM
Some performance discussion...
Well, I'm having some concerns about a function I'm using to get some fields in MySQL.
TupleReduce transforms a tuple into an alphabetically sorted list and then counts only the *different* items in that list.
k = fun.Util()  # My "only program" useful class
z = brain       # My generic functions module
lista = ['marca', 'modelo', 'proveedor', 'clasificacion']  # Fields in MySQL to select
lista_final = []  # The returned list where results are appended
for item in lista:
    get = k.Select("Linarticulos", item)  # Select-field function
    total = str(z.TupleReduce(get))       # I'll explain it below
    lista_final.append(total)
For example, ["hi", "HI", "Hi", "yes"] returns 2.
Here's my concern:
In my fun.Util() I have one connection to MySQL in __init__(self), so all classes can use the connection if needed. OK, so one connection. But as you can see in the function, I issue several SELECT commands, because the k.Select call is inside the for loop.
My question is: if my table fields
lista = ['marca', 'modelo', 'proveedor', 'clasificacion']
had 10,000 rows of data in them (is this too much? right now there are only 4), how would making 4 SELECTs affect performance?
Maybe I should change the k.Select function to allow selecting more than one field, so I only make one query.
But this is the only way I can think of that would stay more or less generic:
k.Select(table,Field1=None, Field2=None, Field3=None, Field4=None)
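Instead of fixed keyword parameters, one way to keep it generic is `*fields`. This is only a sketch of the idea (the `build_select` name is made up, and it just builds the SQL string rather than executing it):

```python
def build_select(table, *fields):
    """Build one SELECT covering all requested fields, so a single
    round-trip replaces one query per field."""
    return "SELECT %s FROM %s" % (", ".join(fields), table)

print(build_select("Linarticulos",
                   "marca", "modelo", "proveedor", "clasificacion"))
# SELECT marca, modelo, proveedor, clasificacion FROM Linarticulos
```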
So, what about the performance issue? I'm a newbie, so I *really* don't have a clue.
August 26th, 2005, 05:07 PM
By "performance issue" I understand you mean speed,
and as I've never worked with MySQL,
all I can say is that the slowest part of your program is the user...
Those people who think they know everything are a great annoyance to those of us who do.
August 27th, 2005, 05:13 AM
I think this is more of a MySQL problem than a Python one.
If you run this function once a month then no problem, but if you run it every time a record is modified this looks like it will hit your computer and the database big time.
Since all you are doing is determining image sizes, why not maintain two databases, the second containing the image references and their sizes? Both databases would be modified as you add images.
Just ideas, but I'm no MySQL expert.
August 27th, 2005, 05:36 AM
Quick rule of thumb: the fewer queries, the better.
From what I gather, you want all fields where lista is one of ['marca','modelo','proveedor','clasificacion'], right?
If so, use IN, and this should be fairly fast (but you'll probably want to check your indexes):
lista IN ('marca', 'modelo', 'proveedor', 'clasificacion')
On any real machine (i.e. decent hardware), MySQL should be able to chuck around / search / edit ~100,000 rows without breaking a sweat (to put that another way: 10,000 rows in a table is considered small to average).
However, this is largely dependent on your indexes (e.g. updates to a field with an index are slower), the size of the fields you're handling, and the various key/sort buffers you've got.
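Since the original goal was counting distinct values per column, the four SELECTs could also collapse into one query with COUNT(DISTINCT ...). A sketch, using the stdlib sqlite3 module as a stand-in for MySQL (note: unlike the TupleReduce example, this count is case-sensitive):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Linarticulos (marca, modelo, proveedor, clasificacion)")
conn.executemany(
    "INSERT INTO Linarticulos VALUES (?, ?, ?, ?)",
    [("a", "x", "p1", "c1"), ("a", "y", "p1", "c2"), ("b", "y", "p2", "c1")],
)

# One round-trip returns the distinct count of every field at once.
row = conn.execute(
    "SELECT COUNT(DISTINCT marca), COUNT(DISTINCT modelo), "
    "COUNT(DISTINCT proveedor), COUNT(DISTINCT clasificacion) "
    "FROM Linarticulos"
).fetchone()
print(row)  # (2, 2, 2, 2)
```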
What exactly are you trying to do?
Last edited by SimonGreenhill; August 27th, 2005 at 05:43 AM.