August 7th, 2013, 09:25 AM
Compiler runs a different file than the one I work on?!
I modified a program I wrote and saved the new version to a new file (a C file).
The files are:
When I compile create_list(2) he actually runs create_list(1).
What should I do?
He didn't react to the new commands I added to main, so I knew something was wrong.
August 7th, 2013, 10:42 AM
How are you invoking the compiler? From the command line? Or through an IDE?
IDEs normally provide project management in which you add files to the project and then the IDE will only use those files. Of course, there's also a way to remove files you no longer want in the project.
If you are using an IDE, did you remove the old file from the project and add in the new one?
When I want to replace a file with a new file, I'll Save As... the old file with a different name, like create_list_old.c, and give the new file the former name of the old file.
And for an ad hoc approach to version control or maintaining a back-up version, I'll create a subdirectory named for the version and copy into it either the entire project or at least the files I'm going to be changing.
Just out of curiosity, what's your native language? I started out as a foreign language major and I couldn't help noticing your use of grammatical gender for the compiler, a concept that English has lost.
August 7th, 2013, 10:51 AM
Thanks a lot.
Originally Posted by dwise1_aol
I solved it by renaming the file.
Oh lol, I didn't even notice the "he". My native language is Russian but I try my best. :D
August 7th, 2013, 11:46 AM
Your English is quite good and has a good colloquial feel; you even throw in enough little mistakes to make it look like a native speaker wrote it. That use of "he" was the first clue I've seen. Of course, that shouldn't be surprising, since your schools start you much earlier than ours. We don't normally start teaching foreign languages until the students are about 14. My primary foreign language is German and I've studied about a dozen other languages, though I'm only proficient in German, French, and Spanish to different degrees. I studied Russian for two years in university about 40 years ago, but never became proficient. Interesting verb system, quite different from English's, which is fundamentally Germanic.
A co-worker's friend did some missionary work in Nizhny Novgorod and I think elsewhere, and he talked about how TV shows would be broadcast in their original language. He said that among the Russian students learning English one of the most popular shows was "Star Trek", and that they especially liked Captain Kirk because his frequent dramatic pauses gave them more time to figure out what he was saying.
When I switched from foreign languages to computer science, I was immediately able to use my language skills in programming -- I'm not too good at speaking and listening, but I'm very good with grammar and the structure of the language and how it works. My first programming class was in FORTRAN and I approached it as just another language. While my fellow students would panic and try to debug their programs by making nearly random changes, I would debug mine by reading for content in order to understand what I was telling the computer to do that caused the wrong results and then realizing what I needed to tell it so that it would do what I wanted it to do.
August 7th, 2013, 12:22 PM
While other students in my high school took their time with English, I started reading books with a dictionary and translating all the words I didn't know. It proved to be VERY useful, like now for example. I've also been to the USA twice, which upgraded my English ^^
What you described about your approach is EXACTLY what I do (!!): I imagine the input being passed through the loops and I try to picture what happens to it / when is the right moment to "catch" it. In 99% of cases, provided that I concentrate enough, I find my mistake... Sometimes I'm just too lazy to do it in the first place, so I just say "I'll just run it and see what happens..." lol
But I actually see the commands I write in a very LITERAL way, as if somebody speaks them out loud while the input is being modified.
I think it's much clearer this way.
I also try to cram as many commands as possible into a single line. Does that also make the program run faster? What does the speed depend on?
August 14th, 2013, 01:32 PM
There is almost no direct correlation between how tightly you pack your source code and how fast the program runs. And even where your coding style does make a difference, that difference is usually quite small compared to the difference made by the algorithm that you choose.
Originally Posted by C learner
When you compile source code (and I'm talking about after the pre-processor has expanded all macros and has done its other business), the first thing that the compiler does is to scan the source code to strip out all white space and to tokenize the code. Thus cramming multiple statements onto the same line makes absolutely no difference whatsoever, except to reduce your program's readability.
Little things will contribute a little efficiency. For example, there's always processing overhead to be paid by making a function call as opposed to writing that function's code in-line. But then there's a size price to pay by repeating the same code multiple times instead of putting it into a function. In C++, you are able to declare a function to be inline, in which case the compiler will insert that code in place of the function call; obviously, this is best done with a short function. For the sake of readability and maintainability, it is better to err towards creating a function than repeating the same code over and over again. There's a quote that I can't find now that says that while you write your programs you should be thinking about the programmer who will have to maintain it, because that will usually be you. And after six months your own code may as well have been written by somebody else (Eagleson's law).
For example, in my first programming job we were doing everything in Pascal -- the DoD had mandated that all programs be written in Ada, but there was no validated Ada compiler available yet, so aerospace companies were using Pascal in the meantime, since Ada was largely based on Pascal syntax. I was the first programmer they hired who already had experience with Pascal (two years in school), and the first one with a computer science degree for that matter. Pascal and C are largely very similar in overall program structure (structured programming). When I was assigned to maintain a project, I could see how the other programmers had been programming in the style of their old languages, but using Pascal. The assembly programmer ran a giant loop setting and testing flags; it was so overly complex and difficult to change that it was easier and much faster to just rewrite it as a Pascal program which could be maintained. The one that came to mind was the code written by a FORTRAN programmer whose procedure was several pages long, as she repeated the same 40-50 lines of code nearly 20 times, but with different input values each time. If I had to change one part of those 40-50 lines, then I'd have to make the same change 19 more times and probably miss one; that's a maintenance issue. So I moved those 40-50 lines of code into a separate procedure and replaced all that inline code with procedure calls, cleaning it up immensely and making it much easier to maintain.
One of my minor practices is to avoid making multiple calls to a function that will always return the same value. For example:

for (i = 0; i < strlen(s); i++)

Since s is not going to change, strlen(s) will return the same value each time you call it. Yet every time you call it, you incur both the overhead of a function call and the execution of the function, which normally involves traversing the string until the null terminator is found. For long strings this slows the program down even more, albeit not perceptibly so. So I prefer to call the function once, saving the result in a variable which I then use in the loop:

len = strlen(s);
for (i = 0; i < len; i++)

Doesn't contribute much, but it certainly feels more efficient.
What really makes a difference is the algorithm you choose. Eg, what kind of a sort or search algorithm and what kind of a data structure you use. A year or so ago somebody gave an analogy of a common algorithm. Let's say that you're painting the center line on a road. You set up your paint can and you start painting. Each time you need more paint, you walk back to the paint can, which is still at the starting position. At first it's fairly quick, but the farther you get, the longer it takes to retrace your steps. It would be more efficient and much faster to periodically move the paint can, even though that requires more work to make that move. Some algorithms require that retracing (repeated calls to strlen, for example) while some algorithms allow you to be more efficient. To get back to your original question, the details of your brush work make almost no difference, whereas the overall approach to the job makes a lot of difference.
In addition, most compilers optimize the code that they generate. You can tell the compiler to optimize either for size or for speed.
August 14th, 2013, 03:17 PM
The strlen(s) makes so much sense (!!), and I never even stopped to think about it. I'll surely use that tip. It just shows me that I have to think about what every command does ALL THE WAY DOWN.
I started thinking more about algorithms just recently and realised that it's a whole different story.
For example, when I use compare functions to sort arrays, the compare does the dirty work, the simple work, whereas the loops determine how efficient the compare's job will be -- by adding break statements, for example, that exit when there is no more need to continue comparing. That probably saves run-time.
So the way I build the outer shell determines how fast the "workers" inside it will finish the job, right?
And as far as I understand, the algorithm is basically the instructions that the bigger parts pass down to the smaller parts included inside them?
August 14th, 2013, 04:23 PM
There's an entire body of theory about algorithms and complexity. As I've said, it's usually introduced as the subject of an entire upper-division course. And it has its own calculus, its own mathematics to help you analyze algorithms. I had that class 34 years ago and am out of practice, except for having a feeling for problems that a particular method might cause. If you go on to do more theoretical work in your post-graduate studies, you will work with it a lot more. All I really remember about the math itself was that we worked with series of sums which we would manipulate into a form that we could recognize. Here's a suggestion if you haven't had series in your math classes (power series usually show up in third-semester calculus). Look in the used bookstores for a reference book called either "Book of Tables" or "Mathematics Handbook". Every math, engineering, and science student and professional used to have one before scientific calculators and personal computers took over; those books contained the trig, log, exponent, and other math tables they needed to do their work. Those books also contained formulae for many things, including for summation series. You could probably find something similar on-line, but I always keep my Taschenbuch der Mathematik handy here at work (it's a translation from a Russian math handbook).
So there's a formal approach to algorithms and there's also an informal approach. The informal approach is what we use daily, wherein we think of a way to do the job and then we try to improve it. Of course, a knowledge of different techniques can make the job easier, since you wouldn't need to re-invent the wheel.
For example, you'll learn the Shell sort and the bubble sort. You've played with the quicksort, but, as I recall, it works best on a list that's not already sorted and worst on a list that is already sorted.
Another consideration is how complex and difficult to implement the algorithm is; the more complex, the more processing overhead. For a small amount of data, a "more efficient" algorithm might not perform as well as a much simpler "less efficient" one. That leads to such sorts as the bucket sort and the merge sort. If you have several sorted lists that you want to merge together into one sorted list, compare the first items in the lists to decide which one to write to the destination list. You can use that to finalize a bucket sort, in which you divide the large amount of data into several smaller groupings ("buckets") that you can use a simple method on, then merge them together into the final list. Another use of the bucket sort could be to segregate words into buckets based on their first letter, sort the buckets, then concatenate them into the final list.
The literature (eg, textbooks, programming examples) is rich with all kinds of sorting algorithms. Wikipedia is usually a good place to start searching on-line.
So then the algorithm is basically the overall approach and strategy that you use to perform the task.
August 15th, 2013, 01:27 PM
I'm slowly catching on :)
Yeah, I remember the bubble sort. I've seen it everywhere in the book, and then when I got to page 300 or something they said: "qsort is much better than bubble sort because its algorithm is thought to be the fastest..." so why have I been playing with bubbles lol....
But it's probably best with pointers, whereas bubble does the sorting "head-on"? Which you also need sometimes...?
August 15th, 2013, 03:31 PM
It's a question of economy. One method works better than another but is more complex. That method enjoys a clear advantage for big jobs (lots of data items), but very little advantage for small or average jobs.
For example (referring to the first Terminator movie): if you need to destroy a T-800 Terminator, you need a big hydraulic press, but if all you want to do is crack open a walnut, a simple hammer will do just fine; why go to the expense of a big hydraulic press for everything?
Similarly, the USA is about 3000 miles coast to coast. If you drive that distance you can make the trip in 54.5 hours at 55 miles per hour. Or you could make the trip at the more efficient speed of 65 miles per hour in just 46.15 hours, which will save you 8.4 hours of driving time, shaving nearly one full day off your trip. But if I apply that to my daily commute to work, about 15 miles, by driving at 65 mph instead of 55 I would only save about 2.5 minutes. Is 2.5 minutes really worth the extra strain on my car and on my nerves having to watch for traffic cops?
For small values, you do not get enough return on using the most powerful and the fastest methods. If one method increases linearly with the number of items, n, sorted and another increases logarithmically, I believe that you will find that for smaller values of n the logarithmic method would actually take more time than the linear method would; it's only for larger values of n that the logarithmic method would be the clear choice. If you graph the curves of the complexities, you should be able to see for what values of n there is a clear difference.
Also, as I remember from what I was taught about quicksort, its efficiency is also affected by the data that it is sorting. As I recall, it performs the worst when the data is already sorted. At least when you apply the bubble sort to data that has already been sorted it figures that out in only one quick pass.
It's good to know about different methods and about how to choose between them. Plus, they give you examples to compare in algorithms class. And in most applications you're going to be working with small sets of data for which a simpler method would work better (eg, I have several places where I have a list of objects numbering no more than 10, so an inefficient linear search is much easier and faster than a binary search, besides which I don't have to keep that list sorted).
August 16th, 2013, 02:09 PM
Yeah, so it's all about maximizing efficiency -- deciding between light and heavy tools. Like waking up a big machine to get the work done, knowing that the time you wait for it to be ready will be nothing compared to the speed of its work. Whereas a smaller machine could be set up much faster but will be less efficient at heavy tasks.
August 16th, 2013, 02:46 PM
Yeah. There are always trade-offs that you should consider.