October 24th, 2013, 08:16 AM
Faster file read and scalability
Let me preface that I searched the forums but didn't find exactly what I was looking for. If this has been answered already, sorry to bring it up again (I am sure my search skills are lacking).
I need to read in a TSV file, and I am using PHP 5.3. I know about str_getcsv for parsing each individual line.
I am curious as to the method that would be fastest and would also allow for the file to grow in size while maintaining speed.
I am sure there are other ways, but these are the two I am leaning towards:
1. fopen with !feof, reading each line and parsing it with str_getcsv as I go
2. file to read everything into an array, then looping through the array and calling str_getcsv on each element
Once again, neither of these may be the best method, so please let me know if there is a better one. I suspect the array in option 2 could get large and slow things down; I just don't know how the two versions compare in speed.
Thank you for your time and consideration,
October 24th, 2013, 08:46 AM
#1 is best since it doesn't load the entire file into memory. If this file is arbitrarily large, that's the way to do it.
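A minimal sketch of option 1, streaming line by line so memory use stays constant regardless of file size. The filename `data.tsv` and the sample rows are invented here purely so the example is self-contained; also note that checking fgets's return value is more reliable than looping on !feof, which can process a spurious final iteration.

```php
<?php
// Hypothetical sample data, written only so this sketch runs standalone.
file_put_contents('data.tsv', "name\tage\nalice\t30\nbob\t25\n");

$rows = [];
$fh = fopen('data.tsv', 'rb');
if ($fh === false) {
    die("Could not open data.tsv\n");
}
// fgets returns false at EOF (or on error), which ends the loop cleanly.
while (($line = fgets($fh)) !== false) {
    // Strip the trailing newline, then split the line on tabs.
    $rows[] = str_getcsv(rtrim($line, "\r\n"), "\t");
}
fclose($fh);
```

Because only one line is held in memory at a time, the file can grow arbitrarily large without changing the script's memory footprint.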
October 24th, 2013, 01:15 PM
You can just use fgetcsv to handle the reading and parsing in one step rather than separating them out.
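For instance, the loop above collapses to a single call per line. As before, the file name and sample data are made up for illustration; fgetcsv's second argument is a maximum line length, where 0 means unlimited.

```php
<?php
// Hypothetical sample data so the sketch is self-contained.
file_put_contents('data.tsv', "name\tage\nalice\t30\nbob\t25\n");

$rows = [];
$fh = fopen('data.tsv', 'rb');
// fgetcsv reads the next line AND parses it in one call;
// passing "\t" as the delimiter handles tab-separated values.
while (($fields = fgetcsv($fh, 0, "\t")) !== false) {
    $rows[] = $fields;
}
fclose($fh);
```

This keeps the streaming behavior of option 1 while avoiding the separate fgets/str_getcsv step.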