April 4th, 2000, 09:12 AM
Anyone know a quick way of inserting rows from one table into another table with the same column names and types?
I have created the second table.
As I didn't know how to do this, I used to export the rows to a file using redirection from the command line, strip the headers from the exported file, and then LOAD DATA INFILE it into the new table from MySQL.
This used to work okay; however, I now have over 1,000,000 rows to port across, and I keep getting out-of-memory errors that nearly crash my webserver.
All I want to do is something similar to:
insert into table2 values (select * from table1 where criteria)
very rough SQL I know, but something similar to that.
I have scanned the MySQL manual, but it only seems to cover INSERT INTO with a VALUES list, and loading from filenames.
Hope someone can help, probably something dead simple that I have not picked up ;o)
April 4th, 2000, 12:07 PM
MySQL has an INSERT ... SELECT syntax. Remove VALUES and put SELECT in its place.
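Applied to the rough SQL from the question (with `table1`, `table2`, and `criteria` standing in for the real names), that would look like:

```sql
-- Copy matching rows from table1 into table2, which has the
-- same column names and types.
INSERT INTO table2 SELECT * FROM table1 WHERE criteria;
```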
April 4th, 2000, 03:51 PM
Thanks for the info, I knew it would be simple.
However, that command seemed to crash our server, whoops. The crash was caused by a "too many open files" error, of all things.
I have contacted our support team to see why copying rows from one table to another should cause the server to crash.
I know it is over 1,000,000 rows but surely it should be okay.
Anyone have any similar problems like this?
Once I get these 1,000,000+ rows archived, I can delete them from the live DB. I guess I will have to archive daily, or weekly at the least. I suppose I could set up a cron job to do it for me.
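A minimal sketch of such an archive script (table names, the `logdate` column, and the cut-off are all placeholders for illustration):

```sql
-- archive.sql (hypothetical): copy rows older than today into the
-- archive table, then remove them from the live log table.
INSERT INTO log_archive SELECT * FROM log WHERE logdate < CURDATE();
DELETE FROM log WHERE logdate < CURDATE();
```

A crontab entry along the lines of `30 2 * * * mysql -u [account name] -p[password] [db] < archive.sql` would then run it nightly.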
April 13th, 2000, 04:38 PM
Have you tried doing your load directly rather than as an insert? Either at the command line or through a shell script you may instruct MySQL to load a tab-delimited file.
The command line would be:
mysql -u [account name] -p -B <load.sql
and load.sql would be:
load data infile '[fully qualified file name]' into table exampletable;
April 14th, 2000, 03:02 AM
Thanks for the reply.
The problem is getting the data out of one table and into another; we cannot get it out of the table, either into another table or into a file, without bringing the web server down.
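If it has not already been tried, one route for the file export is SELECT ... INTO OUTFILE, which makes the MySQL server write the file itself instead of streaming every row back through the client (table name, criteria, and path below are placeholders):

```sql
-- Writes a tab-delimited file on the *server* host; the path must be
-- writable by the mysqld process and the file must not already exist.
SELECT *
  INTO OUTFILE '/tmp/table1_export.txt'
  FROM table1
  WHERE criteria;
```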
We have resorted to a mysqldump, stripped out just the INSERTs that we want, and then search-and-replaced the table name in the file with our new table name. I have now deleted the 1,000,000+ rows, but as it is a log table it has already grown to over 500,000 again.

I really like logging page views to the DB because we get instant counts of full page views and exact pages, unlike the Apache server logs, which need analysis tools.
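The dump-edit-reload steps described above can also be strung together in one pipe; a sketch, with the database, table names, and credentials as placeholders:

```shell
# Dump only the row data for table1 (no CREATE TABLE statement),
# rewrite the table name on each INSERT, and load into table2.
mysqldump -u [account name] -p --no-create-info mydb table1 \
  | sed 's/INSERT INTO table1/INSERT INTO table2/' \
  | mysql -u [account name] -p mydb
```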
Should MySQL be able to handle tables with over 1,000,000 rows efficiently? The table in question has an index and 10 fields. We have no problem inserting into this table, and no problem selecting from it when the WHERE clause uses the index. However, if we query it with a WHERE clause on an unindexed column, the query runs for about 10 minutes (at which point our site dies) before returning the results.
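On that last point: a WHERE clause on an unindexed column forces a full scan of all 1,000,000+ rows, which is exactly what produces those 10-minute queries. Assuming (hypothetically) the slow clause filters on a `page` column, adding an index on it lets MySQL skip the scan:

```sql
-- 'page_log' and 'page' are placeholder names for the log table
-- and the unindexed column being filtered on.
ALTER TABLE page_log ADD INDEX idx_page (page);

-- Afterwards this kind of query can use the index
-- instead of reading every row:
SELECT COUNT(*) FROM page_log WHERE page = '/index.html';
```

The trade-off is that every extra index slows inserts slightly, which matters on a table logging every page view.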