Gentoo Forums
mysql, pc hardware, and an 11 million row database
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Tue Feb 10, 2004 9:13 am    Post subject: mysql, pc hardware, and an 11 million row database

As usual, a simple server-tuning contract job turned into so much more.

MySQL 4.0.x, Gentoo Linux, 2GB RAM, 3-disk RAID5, on a dual 1GHz PIII. It just isn't cutting it, even though the data and index files fit into RAM, 0.9GB total. I believe the main table has too many rows, and any SQL query against it is going to be slow, around 30 seconds in our case.

The db is a demographics-type thingy with the main table being 2 columns of integers coming in at 11 million rows. Most queries end up with a left join to this table to get the demographics data and then figure out which email to send to and whatnot.

Here's a simplified look at the big table. userID refers to a user, such as user 10, and demogID is each option that user checked in the survey. There are 1-400 demogIDs and about 200k users.
Code:

userID    demogID
10            1
10            3
10            4
11            2
11            3
11            4

My thinking is that the above design is completely braindead. My index options are nil, and rumor is that Postgres and MySQL don't do well above 5 million rows. Assuming all that is true, the new table would look like:
Code:

userid    demog_1   demog_2   demog_3   demog_4
10         1         0          1           1
11         0         1          1           1

demog_x would probably go up to 500 and then tie into a separate table recording what each column means as columns get added. We will most likely add 100 columns at a time to make administration easy.

So, on to the actual question: is having more columns going to be a better solution than having the 11 mil rows in MySQL? Ideally I'd like this solution to scale to 1-2 million users with 1-2k demog options. If my idea sucks, how would you, as someone who has run a db with more than 1 million rows, go about doing this?
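For reference, here are the two layouts as rough DDL. This is just a sketch; the table names, column types, and defaults are placeholders, not the real schema.
Code:

-- Current "tall" layout: one row per (user, option) pair.
CREATE TABLE user_demog (
    userID  INT NOT NULL,
    demogID INT NOT NULL
);

-- Proposed "wide" layout: one row per user, one 0/1 column per option.
CREATE TABLE user_demog_wide (
    userID  INT NOT NULL PRIMARY KEY,
    demog_1 TINYINT NOT NULL DEFAULT 0,
    demog_2 TINYINT NOT NULL DEFAULT 0,
    demog_3 TINYINT NOT NULL DEFAULT 0,
    demog_4 TINYINT NOT NULL DEFAULT 0
    -- ...and so on, eventually up to demog_500
);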
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
iamarug
Apprentice

Joined: 09 Feb 2003
Posts: 220

Posted: Tue Feb 10, 2004 8:30 pm

Both solutions require only a single select statement to get the info you need, so on that side either should be okay. However, it is hard to tell which one uses more storage, because that would involve figuring out how many entries each user would have in the first type of structure on average. I would probably go for the 1st way you had it. If worst comes to worst, it should be fairly trivial to write a script to change the tables and retain all the data.

HTH
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Tue Feb 10, 2004 10:21 pm

I believe it's more complicated than that. Doing a full table scan over 11 million rows is more work than doing it over 2 million, even with the added weight of 1000 extra columns. The new method results in a bigger table space, but possibly better-performing queries. Also, I have better indexing options with the newer method... or so it appears.

That's my current theory, which is pretty theoretical; I don't have the experience to say one way or another, which is what I'm looking for. Also, the current db takes 30+ secs to return an answer, and the client is looking for < 5 sec or better while scaling towards 1-2 million userIDs, or 50-100 million rows assuming we change nothing, with roughly the same performance on some added hardware. SBC's (the phone company) billing db is roughly 50 million rows and ranks in the top 10 Oracle installations, which might indicate that the current db design sucks and does not scale.

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
myuser
Apprentice

Joined: 31 Jan 2004
Posts: 218

Posted: Tue Feb 10, 2004 10:27 pm

Alternatively you could use a 4-bit data value, one bit per demogID.

e.g.
Code:

userid      demog
10         1011
11         0111

Of course, you would then have to decode the 4-bit value.
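For instance, something like this, assuming the bits live in an integer column (bit 0 = demog 1, and so on; the table and column names are made up):
Code:

-- Hypothetical table: demog_bits holds one bit per demographic option.
CREATE TABLE user_demog_bits (
    userID     INT NOT NULL PRIMARY KEY,
    demog_bits BIGINT UNSIGNED NOT NULL DEFAULT 0
);

-- User 10 checked options 1, 3 and 4 -> bits 0, 2 and 3 -> 13.
INSERT INTO user_demog_bits VALUES (10, 13);

-- Find everyone who checked option 3 (bit 2, mask 4) with bitwise AND.
SELECT userID FROM user_demog_bits WHERE demog_bits & 4;

The catch is that a WHERE clause like that can't use an index, so every such query is still a full scan, and with 300+ options you'd need several of these columns, since even a BIGINT only holds 64 bits.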
iamarug
Apprentice

Joined: 09 Feb 2003
Posts: 220

Posted: Tue Feb 10, 2004 10:56 pm

If one were to use bit-packed data values, could you then query based on one of the bits, or would you have to do the bit operations outside of MySQL?

Kashani: I think you have a good point in theory :D I guess if you could do some tests on your data set (or something similar) you could come to a better decision.

Btw, if you do some tests, please let us know what results you get!

good luck
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Tue Feb 10, 2004 11:16 pm

I don't think that's a workable idea either, as they've got 300-odd demogID fields to start off. While my knowledge of indexing theory is incomplete, I am quite sure that indexing an int(400) is a really bad idea.

Here's a better idea of something I've been considering. This is what the actual data might represent:
Code:


userID   demogID
10         1 - male
10         5 - 20-30 years old
10         74 - lives in Maryland
10         101 - married
10         114 - $40-50k/year
10         201 - soccer
10         202 - hockey
10         203 - martial arts
11         2 - female
11         6 - 50-60 years old
11         75 - lives in Mass.
11         119 - $90-100k/year
11         0 - didn't fill in any interests


To represent all the above, I can do it the current way, which we can safely say doesn't scale, or come up with a different plan. Possibly breaking up the data into the following:

Code:

userID    demo_sex    demo_age   demo_state   demo_inc   demo_interests
10          1           3            24           4        1,2,3
11          2           5            25           9        0


This latest idea is more work in converting the data and the SQL. Also, there'd need to be some tables saying what the options and whatnot mean, but it is probably much more efficient than having one column for each variable. I'd also need to sit down with the developer and come up with some sort of system for designating single- and multiple-value columns, decide how we'd index them, and look at most of the queries they've used in the past.
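Something like this, I'd guess, for the tables that say what the options mean (a sketch; the names are placeholders):
Code:

-- Hypothetical lookup table mapping option codes to their meanings.
CREATE TABLE demog_option (
    demogID     INT NOT NULL PRIMARY KEY,
    description VARCHAR(64) NOT NULL
);

INSERT INTO demog_option VALUES
    (1, 'male'),
    (5, '20-30 years old'),
    (74, 'lives in Maryland');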

As before, the problem is that the largest db I've administered was never larger than 1 million rows, which while large doesn't have some of the interesting issues that seem to be appearing at 11 million. As of right now I'm not sure what's going to get me more performance and what's mere rumor and voodoo. :?

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
iamarug
Apprentice

Joined: 09 Feb 2003
Posts: 220

Posted: Wed Feb 11, 2004 1:55 am

Have you actually timed any queries on the 11 million rows? Your design decisions may be unnecessarily burdensome if MySQL can handle 11m rows "fast enough".
Obviously, there are certain items (like sex) which may only have one entry (I hope :? ). Furthermore, it seems you would have the information on several items for everyone. With those two facts in mind, it may be a good decision to include such columns in the actual user table while keeping the other demographic items in a separate table.

Btw, out of curiosity, where did you get your nick from?
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Wed Feb 11, 2004 2:53 am

In post #3 I mentioned it takes 30+ seconds to respond. We'd like < 1 sec, but would be happy with < 5 sec as long as it scaled to 1-2 million users.

This is on the new db server, doing nothing other than MySQL. I tuned MySQL with the my-huge.cnf and the usual bit of "you've got 2GB of memory, now use it" parameters. I'm feeling pretty confident that the server doesn't have any more performance in it without changing the database schema. I thought of pushing it to 4GB of RAM, but the entire db with indexes is only 0.9GB, and that's about what I use when I run the query.

Think about it this way: I give you a subject, author, and title library card catalog all smushed together into one system, and then I tell you it isn't in any particular order and make you do searches. You now have the equivalent of the 11 million row db, where all your searches have to go through the entire table. Instead, we break the card catalog into 3 "columns": author, subject, and title. Now you can do searches in a much smaller area. We can also add in Science card catalogs and Fiction card catalogs as well. Our searches have far fewer records to check because we've partitioned our data intelligently.

I'm fairly certain the above describes what's going on, but I'm not sure of the best way to go about fixing it. If you're interested, you should read O'Reilly's Oracle Tuning Guide. It has a number of interesting case studies, but wasn't general enough to help me when I read it last night. I'm taking DB Design for Mere Mortals home tonight, which looks like it might be the right book.

Kashani is the name of my grandmother's family in Iran. Yes, those Kashanis, if you're familiar with 20th century Iranian politics. And it's a type of carpet, which I bet is why you're asking. :)

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
iamarug
Apprentice

Joined: 09 Feb 2003
Posts: 220

Posted: Wed Feb 11, 2004 5:29 am

Actually, I did not ask because of the carpet thing. I am from Iran, so it was interesting to see your nick. Anyways, sorry I didn't catch the 30 second figure.
What kind of query is it? Is it over all of the records in the table? Maybe if you give a general idea of your query, some others could help you out.
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Wed Feb 11, 2004 9:11 am

Dude, yer killing me over here. Any query against a two column table has to do a full table scan. A full table scan of 11 million records will take a long ass time. You can index your table, but this setup is so bad that the index gets you almost nothing.

This is strictly a design problem brought on by having too many rows, no columns for all practical purposes, and lame indexing; thus spake the 2 DBAs I went drinking with tonight. I think the kindest thing anyone said was, "Well, I guess it does work, in a completely lobotomized way. What are you moving them to next? A flat file?"

iamarug, are you still in Iran or somewhere else? I'm in Los Angeles.

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
To
Veteran

Joined: 12 Apr 2003
Posts: 1145
Location: Coimbra, Portugal

Posted: Wed Feb 11, 2004 10:07 am

You will only have one line for each user if you use the 2nd way you have shown. Of course, like you have said, any query on more than 1 column will be slower, and more so with each additional column, and so on. I believe that you should try PostgreSQL. MySQL is faster on smaller dbs, but PostgreSQL is faster on huge dbs. At least it used to be this way; I can't be sure, but you should try.



kashani wrote:

Code:

userID    demo_sex    demo_age   demo_state   demo_inc   demo_interests
10          1           3            24           4        1,2,3
11          2           5            25           9        0

As before, the problem is that the largest db I've administered was never larger than 1 million rows, which while large doesn't have some of the interesting issues that seem to be appearing at 11 million. As of right now I'm not sure what's going to get me more performance and what's mere rumor and voodoo. :?

kashani


kashani wrote:
Dude, yer killing me over here. Any query against a two column table has to do a full table scan. A full table scan of 11 million records will take a long ass time. You can index your table, but this setup is so bad that the index gets you almost nothing.

_________________

------------------------------------------------
Linux Gandalf 3.2.35-grsec
Gentoo Base System version 2.2
------------------------------------------------
iamarug
Apprentice

Joined: 09 Feb 2003
Posts: 220

Posted: Wed Feb 11, 2004 3:27 pm

Damn. I can't see why you would need a full table scan. Why wouldn't a userID index help? I don't think any database can do an 11 million row full table scan in less than 5 seconds on your hardware, so this is a design problem. But seriously, why doesn't an index on the userID help?

ps: I live in Chicago
screwloose
Tux's lil' helper

Joined: 07 Feb 2004
Posts: 94
Location: Toon Town, Canada

Posted: Wed Feb 11, 2004 5:19 pm

Code:


userID    demo_sex    demo_age   demo_state   demo_inc   demo_interests
10          1           3            24           4        1,2,3
11          2           5            25           9        0



This would definitely be a better way to organize your data because it would allow you to set better indexes. There is a problem with the above table layout, though: the demo_interests field is multi-valued, which is immediately a sure sign that this table should be split into two tables, like so:

Code:


Table: Users
userID    demo_sex    demo_age   demo_state   demo_inc
10          1           3            24           4       
11          2           5            25           9       

Table: Interests
userID    demo_interests
10          1
10          2
10          3


This will reduce the size of the Users table and should speed up queries where you are matching specific interests. It will make the database a bit larger due to the extra userID field in the Interests table, but it should help efficiency in the long run. Odds are there are other sets of fields you can split into other tables. Don't forget to make indexes on the fields you will be doing table joins on.
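In DDL that might come out roughly like this; it's only a sketch, and the column types are my guesses:
Code:

CREATE TABLE Users (
    userID     INT NOT NULL PRIMARY KEY,
    demo_sex   TINYINT NOT NULL,
    demo_age   TINYINT NOT NULL,
    demo_state TINYINT NOT NULL,
    demo_inc   TINYINT NOT NULL
);

CREATE TABLE Interests (
    userID         INT NOT NULL,
    demo_interests INT NOT NULL,
    PRIMARY KEY (userID, demo_interests),
    INDEX interests_idx (demo_interests)  -- for "who has interest X" lookups
);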
_________________
If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Wed Feb 11, 2004 7:23 pm

Screwloose,
After getting expert advice last night and being somewhat introduced to normalization, the multiple-columns method is the logical solution. I'm still a bit unsure how to deal with data where the user can check multiple fields. Probably by creating a separate table for each set of interests: the car table, the sports table, etc., with a column for each option they can choose. I don't think any of them have more than 10-20 separate choices. Mostly, the further I get into this, the more it's a matter of learning the basic techniques your average DBA already has for this stuff.

Having bounced this question off several people over the past few days, I've learned a number of things. One of them is that the average Linux/BSD admin/user/developer does not understand databases. I'm not criticizing anyone who posted here; you all really helped me get my head around the problem. But database theory/design/whatever is a real blind spot for many, many usually smart people.

Kashani's top 8 things to know about databases
1. Postgres = MySQL. If anyone can prove that there is more than a 5% difference between these dbs with reproducible lab results, I'll buy you sushi.

2. Swapping out your db in a production system is hard. Don't even bring up switching to your favorite database unless there is concrete data to support the decision. See also #1.

3. "Design is the #1 influence on performance in your database." - DB Design for Mere Mortals

4. One set of data per column. Integers are easier to sort than ASCII. Use that "many to many" thing to avoid ASCII.

5. The word "relation" in relational databases refers to a relation in set theory, the mathematical basis of relational databases, not to any relationship implied between the data.

6. Flat files break down at 2-3k rows; MySQL/Postgres break down at 5-10 mil rows. Everything has a limit; understand your database's.

7. "Blindly upgrading hardware in the face of performance problems will usually make your database slower because of increased contention" - O'Reilly's Oracle Tuning Guide

8. Remember Amdahl's law

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
screwloose
Tux's lil' helper

Joined: 07 Feb 2004
Posts: 94
Location: Toon Town, Canada

Posted: Wed Feb 11, 2004 10:20 pm

I'm one of those bizarre jacks of all trades: I have been trained as an Oracle DBA, Linux/Novell/Windows network admin, and programmer (too many languages), but have somehow found myself mostly building web apps in PHP using pretty much whatever database server my clients have. Your top 8 list is pretty much bang on.

The one thing I would comment on, having used both MySQL and Postgres, is that for speed it won't matter for most users, but sometimes a person doing a more advanced query will find MySQL can't run it due to lack of support for some forms of joins. Again, there are ways around that problem, but I'm not going to get into that. Most of the time I work with MySQL, so I've learned how to get around its quirks.

Anyway, back to your problem: the best way of handling multiple choices is using a separate table for each set of choices, possibly with an additional lookup table to later determine what the choices are.

Here are some really basic tables that hopefully give you the right idea:
Code:


Table: user_sports
UserID    interests_sportsID
1             1
1             3
2             2
2             3

Make both fields part of the primary key

Table: sports
interests_sportsID    sport_name
1                    Hockey
2                    Baseball
3                    Basketball

make interests_sportsID the primary key



The above assumes that you still have a User table something like what was in my last post

Now, the part that makes working with multiple tables more complex is that an insert now requires a separate INSERT statement for each table you are adding information to.

In this case, if you were to add a user, you would first add the user to the Users table, then add a record to the user_sports table for each sport they are interested in. For the tables I have above, user 1 is interested in hockey and basketball, so I put the IDs that correspond to the user and the sport into the user_sports table.

The sports table would almost never be added to, except if you wanted to increase the choices available. Most of the time it would be used in select queries to provide you with the name of the sport rather than a number. These selects should be quite fast, because all the comparisons to get the names of the sports use the ID field, which is numeric and a key field.
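In rough SQL, the flow might look like this (a sketch only; the Users columns are from my earlier post, and the values are made up):
Code:

-- Add the user to the Users table first...
INSERT INTO Users (userID, demo_sex, demo_age, demo_state, demo_inc)
VALUES (1, 1, 3, 24, 4);

-- ...then one row in user_sports per sport they checked
-- (user 1 likes hockey and basketball).
INSERT INTO user_sports (UserID, interests_sportsID) VALUES (1, 1);
INSERT INTO user_sports (UserID, interests_sportsID) VALUES (1, 3);

-- Later, join on the numeric key to get names instead of numbers.
SELECT us.UserID, s.sport_name
FROM user_sports us, sports s
WHERE us.interests_sportsID = s.interests_sportsID
AND us.UserID = 1;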

whew.... I've never tried to teach anyone db design through a msg board before.

A quick warning: it is possible to normalize a database too far; a sign of that is usually when you end up with tables that only contain a single field.

If you're going to be doing a lot of database work, I highly recommend getting a good vendor-neutral book on SQL that contains a chapter or two on how to normalize data to the various normal forms. Sybex's Mastering SQL by Martin Gruber is a good SQL reference for writing all sorts of queries, but the normalizing chapter sucks.

Hopefully this post helps
_________________
If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law


Last edited by screwloose on Wed Feb 11, 2004 11:57 pm; edited 1 time in total
screwloose
Tux's lil' helper

Joined: 07 Feb 2004
Posts: 94
Location: Toon Town, Canada

Posted: Wed Feb 11, 2004 10:35 pm

Quote:

6. Flat files break down at 2-3k rows; MySQL/Postgres break down at 5-10 mil rows. Everything has a limit; understand your database's.


The limits for MySQL and Postgres are much higher; the following link mentions that MySQL was designed with databases of 10-100 million rows in mind:

http://www.mysql.com/doc/en/Compatibility.html

And from personal experience that sounds about right. Of course, I'm sure that assumes normalized tables, proper indexes, and a server config tuned to your application.
_________________
If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law
iamarug
Apprentice

Joined: 09 Feb 2003
Posts: 220

Posted: Wed Feb 11, 2004 10:43 pm

I think you guys have found your solution. One other thing that concerned me (don't know if it is really important) was that if you were to break up all of these things into different tables, then in order to get your information you would have to do a join across a ton of tables. Your case is a many-to-one normalization. If I had been in your situation, I would have created it the following way:
Code:

recordid   userid   demographicid
0          1        2
1          3        3
2          3        12

demographicid   demographicdata
0               male
1               female
2               unemployed

I would include demographic data that all people have (such as sex, etc.) in the user table.

Then I would have constructed the index on the userid field. This seems to make a lot of sense to me, and I would expect results from queries in a short period of time (much, much less than 30 seconds).

I still don't know why this solution would not work!

I am glad you have found a solution that suits you.
screwloose
Tux's lil' helper

Joined: 07 Feb 2004
Posts: 94
Location: Toon Town, Canada

Posted: Wed Feb 11, 2004 11:56 pm

I'm not sure, but I think you didn't quite read my example right, or maybe I wasn't clear enough about the relationship I described above; it is actually a many-to-many relationship (e.g. there are many users, each of which may have 0 to many sports associated with them).

The most efficient way of handling that situation, as far as I know, is with multiple tables. I do agree that any field that only has one value at a time (e.g. the user's age) should remain in the user table.

Table joins aren't an overly expensive operation as long as you use proper indexes, and they can add a lot of flexibility to an application.
_________________
If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Wed Feb 11, 2004 11:57 pm

screwloose wrote:


The limits for MySQL and Postgres are much higher; the following link mentions that MySQL was designed with databases of 10-100 million rows in mind:

http://www.mysql.com/doc/en/Compatibility.html

And from personal experience that sounds about right. Of course, I'm sure that assumes normalized tables, proper indexes, and a server config tuned to your application.


Perhaps it's best to say that the rules of the game change significantly at 5-10 million plus rows, which might account for the fact that many people seem to have problems at those levels.

I did get to demonstrate the flat-file issue to my jr. engineer recently, when he took down a cluster of 4 webservers with only 200 http requests. Turns out he had added 20k lines of mod_rewrite rules to the config.
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Thu Feb 12, 2004 12:00 am

Screwloose,
Same boat as you, though I came from a Cisco/ISP/NSP background. These forums have it all, db design just being the latest. First time for everything, I suppose. :) But this is exactly what I'm looking for, the voice of experience... "No, you don't want to do it that way, kid, you'll put your eye out."

I agree with your points on the MySQL/Postgres debate. The current developer is actually more familiar with Postgres than MySQL, but there is too much code and business logic to start fresh. We both talked about it and decided to stick with MySQL for the time being, though if we do a complete rewrite he'd prefer Postgres, and I see no problems with changing the database at that point.

The new plan is roughly

master_table: user, age, sex, state, zip, income, status, etc.
Basically anything that has a single data field. I can break this up into more tables, but I don't want to get carried away like you mentioned; I'm still a bit confused on how much is too much. Is it better to have all of these in a single table, assuming that we'll be looking for 0-4 of them in any particular query? Or should I have 10 tables? I am worried that having the userID appear multiple times in any table gets us back to this problem as we grow. Ideally none of the tables would ever be larger than the number of userIDs, which should keep everything under 1-2 million rows for the foreseeable future.

Then we'll have the assorted special-interest tables such as sport_table, car_table, computer_table, and so forth. My working assumption is that most queries are going to pull something from the master table and then go looking for 1 or more matches in the special-interest tables. 99% of those will only hit a single special-interest table, or at least that's what I expect to find.

Examples (a rough SQL sketch of the first one is below):
query for males, 20s, 30k+, buying a new car, owns a Jeep
query for anyone, owns a computer, has DSL
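Something like this, maybe (a sketch only; the table layout, column names, and encoded values are all made up for illustration):
Code:

-- Males in their 20s making 30k+, buying a new car, who own a Jeep.
SELECT m.userID
FROM master_table m, car_table c
WHERE m.userID = c.userID
AND m.sex = 1            -- 1 = male (hypothetical encoding)
AND m.age_bracket = 2    -- 2 = 20-29 (hypothetical encoding)
AND m.income >= 30000
AND c.buying_new = 1
AND c.owns_jeep = 1;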

Luckily there is a developer who is writing all this code and, even better, has to go through and figure out what the application is actually doing. We both sort of inherited this mess, and it gets worse every time we turn our backs.

According to the books, the next step is to write all the fields down, look at the queries, and design a schema according to some parameters I'm still learning, which will make all these problems go away.

Thanks for the input, it's nice to know I'm not preparing to walk off a cliff.

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
kashani
Advocate

Joined: 02 Sep 2002
Posts: 2032
Location: San Francisco

Posted: Thu Feb 12, 2004 12:10 am

screwloose wrote:
The most efficient way of handling that situation, as far as I know, is with multiple tables. I do agree that any field that only has one value at a time (e.g. the user's age) should remain in the user table.

Table joins aren't an overly expensive operation as long as you use proper indexes, and they can add a lot of flexibility to an application.


I wasn't sure just how far to take the multiple tables before running into problems at the other end of the scale. Most of the queries touch fewer than 6 fields, so it might not be as expensive as I've been imagining. Also, my tables are going to be fairly small, around 20MB each assuming 2 million users, so fitting it all into 4GB of RAM shouldn't be that hard.

kashani
_________________
Will personally fix your server in exchange for motorcycle related shop tools in good shape.
screwloose
Tux's lil' helper

Joined: 07 Feb 2004
Posts: 94
Location: Toon Town, Canada

Posted: Thu Feb 12, 2004 12:43 am

Too much is when your queries start to become slower due to the fragmented data (yes, I know that's still kind of vague). An easy way to tell how efficiently a particular query is running is to use the EXPLAIN command.

Code:

Normal query:

SELECT userID, user_sports.interests_sportsID FROM user_sports, sports
WHERE user_sports.interests_sportsID = sports.interests_sportsID
AND sport_name = 'Hockey';

Explain query:

EXPLAIN SELECT userID, user_sports.interests_sportsID FROM user_sports, sports
WHERE user_sports.interests_sportsID = sports.interests_sportsID
AND sport_name = 'Hockey';


I wish I could show the output, but where I am right now I don't have access to a MySQL database. The output is quite different from a normal select, though. You might want to look up the details of this command, but basically it can show you which indexes (if any) are being used and how many rows the database has to read from each table to generate the results of the query.

I'm sure running this on your original DB setup looks quite horrific. As you add indexes and move fields, keep checking to see if you are lowering the number of rows read. That, combined with the time your query takes to run, should give you a pretty good indication of which tables could be tweaked and where indexes may be necessary. I expect your user table will usually have to read most if not all of its rows, but you should see improvements.
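The basic loop is: EXPLAIN, add an index, EXPLAIN again. Something like this, with placeholder table and index names:
Code:

-- Check the plan before...
EXPLAIN SELECT userID FROM big_table WHERE demogID = 74;

-- ...add an index on the column being filtered on...
ALTER TABLE big_table ADD INDEX demog_idx (demogID);

-- ...then re-run it and watch the "rows" estimate drop.
EXPLAIN SELECT userID FROM big_table WHERE demogID = 74;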

Definitely keep me posted on your progress; I'm interested to know how much of an improvement you make in the end.
_________________
If something can go wrong it probably already has. You just don't know it yet. ~Henry's Modified version of Murphy's Law