February 26th, 2003, 12:48 PM
Jason, this is the statement that made me think that you believed that code had to be in include files to make things flow smoothly.
In answer to most of your other statements, I think we are just coming from entirely different environments. The C code I write gets built for Win32, Win64, Linux, UnixWare, Netware, Tru64 Unix, and HP-UX. We have not found a language other than C that can do this well. I have never even seen Delphi and only briefly used TPW a long time ago. I suspect that many of the things you talk about the compiler doing for you are things that most of us C developers wouldn't want to trust to the compiler. The bottom line is that I believe the Pascal model that you use may not be suitable for anything other than small to medium projects. Of course, keep in mind that this is coming from someone who will most likely never use Pascal or Delphi and therefore doesn't really know what he is talking about in regard to them. For me, and most of the programmers that I work with, C isn't as broken or as "flawed" as you seem to think. To me, having to put in a #include to be able to use code in another file is just another way of telling the compiler to "use" that module.
February 27th, 2003, 07:35 AM
dwise1_aol, thank you for your input, but I think you missed the point I was trying to make. I am fully aware of 'guard defines'. The fact that you must do this in C/C++ is precisely the point I am trying to make. Pascal's units take care of this (among many other things). The only way to make modules work properly in C/C++ is for the programmer to make sure that he 'codes' such things (as guard defines) outside of his programming code. In Pascal, the only code I write is actual program code. The rest is up to the compiler. The compiler handles what it should handle, so the programmer does not have to worry about it.
The problem is that includes are not a proper solution to the modularity problem. Standard Pascal, C, and C++ are all plagued with this. Borland's Turbo Pascal (and I believe Delphi) have solved this problem with units.
February 27th, 2003, 07:56 AM
Re: My Own Two Cents
dwise1_aol: To respond to your second post:
I agree 100% with your conclusion (i.e. C is the closest thing we have to a universal computer language.). It is the reason I am using it, as I stated in a prior post. But, I just wanted to point out a few things that you said:
1. Yes, standard Pascal is a horrible language, and since every company had to add its own extensions, it has become unportable. I would like someone to comment on the number of people who use the different Pascal compilers, if anyone knows any numbers. I was under the impression that Borland's Delphi has the vast majority of the market? Even so, I understand that this still does not make it portable from machine to machine.
2. That was a very interesting comment in regard to p-code and Java. From the beginning, the fact that a language was designed around pseudo-code was an amazing idea - there is no need to convert it into pseudo-code for a C programmer to understand it. I still think it is really too bad that the original standard of Pascal was not made with the extensions that Borland added to it, because it really is an amazing language. That would also have made it portable (it is already industrial strength); the lack of portability is Pascal's major flaw.
3. All the benefits of C are included in Turbo Pascal (i.e. "including assembly instructions, bit-fiddling, and accessing specific memory locations and hardware registers"; perhaps not bit-fiddling [I am not sure], but everything else I have personally used). This goes to show that you can have elegance and get the job done at the same time. I do not believe that 'getting the job done' is a legitimate excuse for why you can see 'C's butt crack'.
4. "Because of C's standard library, standard C is already industrial strength." Agreed.
February 27th, 2003, 08:24 AM
3dfxMM: Aha, I see where you got that from. When I said 'code', I meant code that is not programming code (i.e. include directives, etc.). That statement was trying to explain that in C, you must add all kinds of extra 'non-programming code' to your program to get the compiler to build modules properly.
Also, I was under the impression that you were a C and Delphi programmer, so I could not understand why you didn't know the luxuries of using units in Pascal, which I am almost certain still exist in Borland's Delphi. I thought I had seen you post in another thread that you use Delphi... my mistake.
I agree that C is portable to many different machines, such as the ones you have mentioned. I do believe that this makes the language superior if you are looking for that portability. I know I am, so I use C.
However, this does not mean the language itself is the best way to get things done. Delphi is used to make large-scale projects as well. In fact, the very things I am attempting to explain about units make this that much easier. If you got into OOP with Turbo Pascal or Delphi, then you would see again that its implementation is far superior to that of C.
You state: "I suspect that many of the things you talk about the compiler doing for you are things that most of us C developers wouldn't want to trust to the compiler." This is not true. Allow me another chance to explain:
One example: 'guard defines'. Why do you need to put them at the top of every header file? Because with the include directive, the compiler does not handle this for you. Units do.
You stated: "Of course, keep in mind that this is coming from someone who will most likely never use Pascal or Delphi and therefore doesn't really know what he is talking about in regard to them." I think this is our problem. I was under the impression that you had used Turbo Pascal, C, C++, and Delphi extensively, so I thought you would know what I was talking about when I briefly mentioned them.
Here's an analogy to why units are better than includes:
Imagine if you had to specify which source files got recompiled each time you compile your program. Then imagine a new compiler that knows to recompile only the ones that have changed since the last compile. This is the kind of thing that should be done by the compiler. There is no question that any programmer would give up this control and let the compiler handle it, since it is something he need not control. This is what units do to replace includes.
Another thought: think from Borland's perspective. Look at all the things I have mentioned that Borland improved upon standard Pascal in their design of Turbo Pascal. 99% of everything C can do, Turbo Pascal can do, including null-terminated strings - it has it all, so they were smart. Then, in v4.0 of Turbo Pascal, they saw a way to improve the including of files. Their solution was units. If you look up units in a help file, it specifically says that they were introduced to do away with includes. The point is that they did something right: everyone stopped using includes (which Turbo Pascal has always remained backwards compatible with) and never looked back.
You state: "To me, having to put in a #include to be able to use code in another file is just another way of telling the compiler to "use" that module." Yes, in Pascal you just put "uses unitname" and it 'uses' that module. But this is all I do. In C, this is not all you do, is it? That's my point.
In conclusion, perhaps 'flawed' is a bad word. C is not flawed, since it works. But it is far from perfection, and Borland's implementation of Pascal is far closer to perfection, just as IDEs are far closer to a perfect development tool than command-line compilers with make files. In the end, it is just annoying to me to have to do all this extra work to make another header file when I know about the luxuries of avoiding all of this with Pascal.
February 27th, 2003, 10:36 AM
>>imagine if you had to specify which source files got recompiled each time you compile your program. Then imagine a new compiler that knows to only recompile the ones that are changed since the last compile. This is stuff that should be done by the compiler. There is no question that any programmer would give up this control and let the compiler handle it, since it is something that he needs not control. This is what units do to replace includes.
I don't see an advantage that Delphi/Turbo Pascal has here. C toolchains have ALWAYS done this, since day one (back in the 1970s). In fact, it was Turbo Pascal playing catch-up, because it didn't do this before version 4 came out. #includes have nothing to do with whether the compiler decides what should be recompiled or not. A C build always checks whether there is an object file for every source file and whether the object file is newer than the source file. If either of these two conditions is not met, it recompiles that source file. This has been the behavior of every C toolchain since time immemorial. A Delphi or Turbo Pascal compiler likewise checks whether there is a .dcu (or .tpu) file for every .pas file, and whether the .dcu file is newer than the .pas file. That's how it decides whether to recompile a source file or not.
>>you state: "To me, having to put in a #include to be able to use code in another file is just another way of telling the compiler to "use" that module." Yes, in pascal you just put "uses unitname" and it 'uses' that module. But this is all I do. In C, this is not all you do, is it? That's my point.
I always looked at it as a matter of good coding principle to put a #ifndef around my #defines (guard defines) -- it is not really necessary. Bear in mind that the function prototypes in an #include file are there to tell the C compiler how to type-check and convert arguments, so that they match the actual functions. A Delphi/Turbo Pascal compiler looks at the .dcu (or .tpu) file to do the same thing, so I have no idea what point you're trying to make about the advantages of uses over #include aside from syntactical differences.
In fact, until I started to work with C++, none of my code had #ifndef in my .h files (and I'd written a few fairly large programs, mind you). If you have a copy of the Lions Book, you'll notice that NONE of the .h files have guard defines in them -- and this is the source code of the UNIX operating system, which is a fairly large project.
About keeping the implementation and the interface sections in one file or two, there are advantages and disadvantages to both. The Turbo Pascal approach keeps it to one file, which makes the number of files in the project smaller. However, if you decide to distribute your code in object-code form (to keep your source code secret), there is no quick way for another programmer to check what parameter types a function accepts or returns, because both the implementation and the interface sections are bundled into the .dcu file. With a C program, the compiled code is in the object file, but the interface is in the .h file in plain ASCII text. Also, some people prefer to see the implementation and the interface in separate files, so the advantages/disadvantages are all in the eye of the user. If anything, the interface section of Turbo Pascal/Delphi is wordier than it needs to be (you have to ensure that each argument of a function in the interface section is named, and has the same name in the implementation section; if the names don't match, the compiler actually treats this as an error). Whether this is an advantage or a disadvantage depends on the user as well. I personally don't care either way.
>> Delphi is used to make large scale projects, as well.
This is one statement that I second, as I've personally done a few fairly large projects in Delphi.
>> 99% of everything C can do, Turbo Pascal can do, including null terminated strings - it has it all, so they were smart.
The reason they added support for null-terminated strings was so that Turbo Pascal programs could interface with modules written in C. There was, however, one entire class of programs that I could not write in Turbo Pascal that I could write in Turbo C: decent TSR programs. Turbo Pascal did have some features for writing interrupt handlers, but I think Turbo C had some advantages. For instance, there was no way to write an INT 13h (disk I/O) interrupt handler in pure Turbo Pascal because of the strict type checking. You had to have some assembly language code to return status in the FLAGS register, and I (and most of my friends) didn't have a compatible assembler. With Turbo C, you could set the FLAGS register without resorting to assembly, by diddling the stack in your handler (it's a bit of a hack, but quite a few TSR books taught this technique and it was pretty widely used). This was one of the major reasons I switched over to using Turbo C for a while.
BTW, just so you should know, the Delphi compiler uses the Borland C++ compiler as its backend.
February 27th, 2003, 10:56 AM
BTW, if you want to know where my preferences lie, I prefer to use the best tool for the job. I use Delphi most of the time at work, because there are other guys who program in it. I use C++ Builder (which incidentally can compile Delphi code as well) with libraries and routines that are written in C or C++, since I can't be bothered to translate everything to Delphi. If I need my program to be exceptionally small, or am using some code that uses MFC, then I use Visual C++.
Back in the DOS days, I had a very good text-mode windowing library written in C (thank you, Al Stevens), so I used that for text programs. However, I had a nice graphics-mode windowing library for Turbo Pascal, which I used when I needed graphical windows. When TP 5.5 (or was it 6?) came out with Borland's Window functions for text mode, I still preferred Al Stevens' code, because it was smaller and easier to understand. For TSR programs, I mostly used Turbo C because of the better support, plus it could compile to a variety of memory models (Turbo Pascal 5.0 only compiled to the large memory model, IIRC). Besides, I could also use Al's windowing library with my TSRs, and it didn't use any overlay files or anything.
So I guess I just use whatever is convenient for me.
Last edited by Scorpions4ever; February 27th, 2003 at 11:04 AM.
February 27th, 2003, 11:07 AM
Please ensure you read my posts carefully before replying. I stated that the 're-compiling of old files' was an analogy to the point I was trying to make. Since no one here has used Turbo Pascal/Delphi units before (which I had thought was not the case), it is more difficult for me to explain that the benefits of units come from fixing the problems with includes. They were developed for that sole purpose. They do not take away any control the developer wishes he had; they handle the things you should not have to worry yourself with.
Also, Turbo Pascal was not playing catch-up, since I was not saying that they introduced this in version 4. They introduced units in version 4, which puts it head and shoulders above any language that relies on include files for creating modular code. I am fully aware that includes have nothing to do with recompiling code that hasn't been modified, and I know exactly how a compiler determines whether or not code needs to be recompiled. It is elementary. Again, please read my posts more carefully. Otherwise this discussion can very quickly turn into an argument, which in turn solves nothing.
I have explained the benefits many times in this thread. They are as simple as the analogy I made regarding IDEs. They allow you to code without worrying about things that the compiler should handle. I never need to worry about missing a guard define in Pascal, yet I have to in C. I have to make a deliberate effort to code properly so that it all works smoothly. I miss the days of programming in Turbo Pascal, when I did not have to.
It is amazing that the source code to UNIX has no guard defines. I would have thought it would be hard to find any professional C/C++ programmer today who does not use them. Regardless, this does not mean that they are 'ok', since you must take care one way or another (whether you use them or not) to ensure proper compilation. My point is: why shouldn't the compiler worry about such things?
Please keep in mind that this is not a language 'war' - I believe the C standard (as well as the Pascal standard) could be vastly improved with the addition of such things, just as Borland has done with their own version of Pascal.
Quite correct. I would have believed that having one file instead of two outweighs this benefit, since giving people .h files with the executable is just a simple form of documentation, which is still possible with the one-file method. But I understand that it is a very quick way to distribute documentation, as well as a good way to ensure that it is up to date, which is important.
I would care, personally, since it ensures fewer errors. This is why it is nice that languages now perform type checking. I would have thought everyone would agree with this... isn't this the same sort of 'improvement' that type checking was? I believe it is.
I am glad that we have a Delphi'er on board! And I am glad you are posting your thoughts. Thanks.
Regarding the TSRs, I did the same sort of thing. Even though the language supported interrupt calls and everything, I found it much easier to simply jump right into the assembly code. It was actually shorter if you just intended to call a single interrupt, from what I remember. I always had problems with writing to disk from TSRs, though obviously not for the reasons you stated, as I was writing in assembly from within Turbo Pascal.
That doesn't surprise me at all, but is interesting to know.
February 27th, 2003, 11:13 AM
Good comments. Definitely, the right tool for the job is what you should use. Currently I am concerned about portability, so I am using C++ (actually C, but I sometimes get into C++ because it allows some useful things that C doesn't).
But what I was trying to say is that I believe the languages (all of them) can be improved. I know they can, because I have seen it done. Changing the standard itself is a whole different story, though.