    #16
  1. Contributing User
    Devshed Supreme Being (6500+ posts)

    Join Date
    Jan 2003
    Location
    USA
    Posts
    7,210
    Rep Power
    2222
    Read the standard.

    As I recall, float is 32-bit while double is 64-bit, but don't take my word for it; read the standard. That is why there is a standard!

    Then figure out how much difference a 64-bit floating-point representation would make over a 32-bit one.
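    For a concrete way to do that figuring, here is a minimal sketch (illustrative only; the coordinates and the standard initial-bearing formula are just assumptions for the experiment) that computes the same heading once in double and once in float:

    Code:
    /* Illustrative only: compute the same initial great-circle bearing in
     * double and in float to see how far the 32-bit result drifts from the
     * 64-bit one.  Coordinates are arbitrary nearby points, in radians. */
    #include <stdio.h>
    #include <math.h>

    static double bearing_deg_d(double lat1, double lon1, double lat2, double lon2)
    {
        double dlon = lon2 - lon1;
        double y = sin(dlon) * cos(lat2);
        double x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dlon);
        return atan2(y, x) * 180.0 / 3.14159265358979323846;
    }

    static float bearing_deg_f(float lat1, float lon1, float lat2, float lon2)
    {
        float dlon = lon2 - lon1;
        float y = sinf(dlon) * cosf(lat2);
        float x = cosf(lat1) * sinf(lat2) - sinf(lat1) * cosf(lat2) * cosf(dlon);
        return atan2f(y, x) * 180.0f / 3.14159265f;
    }

    int main(void)
    {
        double lat1 = 0.9075712, lon1 = -1.2915436;
        double lat2 = 0.9075802, lon2 = -1.2915301;

        printf("double: %.6f degrees\n", bearing_deg_d(lat1, lon1, lat2, lon2));
        printf("float : %.6f degrees\n",
               bearing_deg_f((float)lat1, (float)lon1, (float)lat2, (float)lon2));
        return 0;
    }

    Comparing the two printed headings shows directly how much of the difference actually matters for your application.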
  2. #17
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Posts
    91
    Rep Power
    2
    Originally Posted by dwise1_aol
    Read the standard.

    As I recall, float is 32-bit while double is 64-bit, but don't take my word for it; read the standard. That is why there is a standard!

    Then figure out how much difference a 64-bit floating-point representation would make over a 32-bit one.
    It still doesn't make much difference... so it's not a problem...

    And if I'm not wrong, a double would just make the micro-controller slower...
  4. #18
  5. Contributing User

    Join Date
    Aug 2003
    Location
    UK
    Posts
    5,116
    Rep Power
    1803
    Originally Posted by zedeneye1
    And if I'm not wrong, a double would just make the micro-controller slower...
    Ah... that's a different consideration; this is the first time that you have mentioned the target platform. Indeed, this would be a consideration. Especially if there is no hardware floating point. That said, if the calculation is not performed often and has modest real-time constraints, it may not be an issue.
  6. #19
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Posts
    91
    Rep Power
    2
    Originally Posted by clifford
    Ah... that's a different consideration; this is the first time that you have mentioned the target platform. Indeed, this would be a consideration. Especially if there is no hardware floating point. That said, if the calculation is not performed often and has modest real-time constraints, it may not be an issue.
    Well, I learned C because I wanted to make some small autonomous robots, and someone suggested that I should learn C to start. I was thinking of how a quadcopter would be programmed, and one of the things was a guidance system. So that's why I wanted to make a program to calculate headings...

    And you are right, it doesn't have to be calculated very often, so a double would make sense, except plus/minus one degree in heading makes little difference.

    Now I'm going to start microcontroller programming and keep practicing C as well...
  8. #20
  9. Contributing User

    Join Date
    Aug 2003
    Location
    UK
    Posts
    5,116
    Rep Power
    1803
    Originally Posted by zedeneye1
    And you are right, it doesn't have to be calculated very often, so a double would make sense, except plus/minus one degree in heading makes little difference.
    For performance on processors without an FPU you should consider using fixed point arithmetic. A good fixed-point library that uses the CORDIC algorithm for fast trigonometry can be found here. It is a C++ library, which may be a problem if your device does not support a C++ compiler, but through extensive function and operator overloading you can use the "fixed" type it defines almost interchangeably with the built in types. The library is not huge and will work on all but the most resource constrained parts, and using C++ compilation itself adds little or no overhead over C compilation.
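    To make the idea concrete, here is a minimal Q16.16 fixed-point sketch (an illustration of the general technique only, not code from the library linked above, which also provides the CORDIC trigonometry):

    Code:
    /* Minimal Q16.16 fixed-point illustration: values are 32-bit integers
     * scaled by 2^16, so all arithmetic stays in integer instructions. */
    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t fixed;                      /* Q16.16 */
    #define FIX_ONE      (1L << 16)
    #define TO_FIX(x)    ((fixed)((x) * FIX_ONE))
    #define TO_DOUBLE(x) ((double)(x) / FIX_ONE)

    static fixed fix_mul(fixed a, fixed b)
    {
        /* Widen to 64 bits so the intermediate product does not overflow. */
        return (fixed)(((int64_t)a * b) >> 16);
    }

    static fixed fix_div(fixed a, fixed b)
    {
        return (fixed)((((int64_t)a) << 16) / b);
    }

    int main(void)
    {
        fixed a = TO_FIX(1.5), b = TO_FIX(2.25);
        printf("1.5 * 2.25 = %f\n", TO_DOUBLE(fix_mul(a, b)));
        printf("1.5 / 2.25 = %f\n", TO_DOUBLE(fix_div(a, b)));
        return 0;
    }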

    One problem with fixed-point however is that for expressions using values near the lower range limit, precision can be lost leading to large errors. For example see my question on StackOverflow here. Note that in that question I have posted an improved sqrt() function to replace the one in the library mentioned above, however even that did not ultimately solve my problem and I ended up using std::sqrt() for that part of the algorithm but fixed point elsewhere.

    Also note that there is a bug in the log_2_power_n_reversed table as described here.

    I have successfully used this library in a number of real-world commercial applications. On an ARM7 processor it yields about a 5x performance improvement over software floating point - making it similar in performance to an ARM7 with a VFP (of which there are precious few).

    More modern ARM Cortex-M4 parts often incorporate a hardware floating point unit, but it is only single precision so using double might significantly affect performance. Even if a part has an FPU with double precision support, you have to move twice as much data in and out of memory, and unless the data path is 64 bit, which would be unusual, it will take twice as long, so algorithms that process a lot of data in matrices for example may be severely hampered by the use of double precision.
    Last edited by clifford; January 3rd, 2013 at 02:54 PM.
  10. #21
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Posts
    91
    Rep Power
    2
    Originally Posted by clifford
    For performance on processors without an FPU you should consider using fixed point arithmetic. A good fixed-point library that uses the CORDIC algorithm for fast trigonometry can be found here. It is a C++ library, which may be a problem if your device does not support a C++ compiler, but through extensive function and operator overloading you can use the "fixed" type it defines almost interchangeably with the built in types. The library is not huge and will work on all but the most resource constrained parts, and using C++ compilation itself adds little or no overhead over C compilation.

    One problem with fixed-point however is that for expressions using values near the lower range limit, precision can be lost leading to large errors. For example see my question on StackOverflow here. Note that in that question I have posted an improved sqrt() function to replace the one in the library mentioned above, however even that did not ultimately solve my problem and I ended up using std::sqrt() for that part of the algorithm but fixed point elsewhere.

    Also note that there is a bug in the log_2_power_n_reversed table as described here.

    I have successfully used this library in a number of real-world commercial applications. On an ARM7 processor it yields about a 5x performance improvement over software floating point - making it similar in performance to an ARM7 with a VFP (of which there are precious few).

    More modern ARM Cortex-M4 parts often incorporate a hardware floating point unit, but it is only single precision so using double might significantly affect performance. Even if a part has an FPU with double precision support, you have to move twice as much data in and out of memory, and unless the data path is 64 bit, which would be unusual, it will take twice as long, so algorithms that process a lot of data in matrices for example may be severely hampered by the use of double precision.
    I'm very sorry, but I don't know about most of what you said. I'm so new to C, programming, microcontrollers, etc. that I can't understand most of it. I'm not sure if I already mentioned it, but I have just started learning C as a hobby... so at this point I can't really understand all that...

    So I will just ask some simple questions:

    - The ATmega32 does 16 MIPS. How much exactly is that? I mean, it's 16 million instructions per second, but in terms of running loop code, how many times will it be able to go through the loop in a second?

    - What exactly is an instruction? When it (the ATmega32 datasheet) says it can do 16 MIPS, what kind of instruction is it talking about?

    - Is there some software to simulate a microcontroller and see how many times it would execute loop code in a second?

    - So there is no (easy) way to have 64-bit data on a processor of lower bits like 32, 16, 8, etc.? And you will always have to write complex code to compensate for it...

    thanks.
  12. #22
  13. Contributed User
    Devshed Specialist (4000 - 4499 posts)

    Join Date
    Jun 2005
    Posts
    4,403
    Rep Power
    1871
    How about just making it work, before you start hacking away at all sorts of optimisations to make it quicker.

    For instance, let's start with
    - where you're getting the data from, and at what rate?
    - where the data is going to, and at what rate?

    Now, if the source data is 1Hz, and the destination rate is 0.5Hz (think - how often is a person going to read this, before it changes), and your loop using just a floating point library can run at say 2Hz (in fact, anything quicker than 1Hz), what would be the point of doing any optimisation? It's not like you have anything better to do.

    Sure, if you were a professional programmer designing a consumer product, then less time in the loop would make for better battery life. But you're not there yet.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper
  14. #23
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Posts
    91
    Rep Power
    2
    Originally Posted by salem
    How about just making it work, before you start hacking away at all sorts of optimisations to make it quicker.

    For instance, let's start with
    - where you're getting the data from, and at what rate?
    - where the data is going to, and at what rate?

    Now, if the source data is 1Hz, and the destination rate is 0.5Hz (think - how often is a person going to read this, before it changes), and your loop using just a floating point library can run at say 2Hz (in fact, anything quicker than 1Hz), what would be the point of doing any optimisation? It's not like you have anything better to do.

    Sure, if you were a professional programmer designing a consumer product, then less time in the loop would make for better battery life. But you're not there yet.
    I have no idea at what rate different GPS chips give data, but I did look at some datasheets of cheap GPS chips and many were actually at 1 Hz (1 measurement every second).

    As far as working is concerned, I do have the program working... but I'm a beginner programmer and don't even fully know everything in C. I still have to properly learn strings, file input/output, GUIs, etc., so I will have to learn those things first before I can do anything significant...
  16. #24
  17. Contributing User

    Join Date
    Aug 2003
    Location
    UK
    Posts
    5,116
    Rep Power
    1803
    Originally Posted by zedeneye1
    - The ATmega32 does 16 MIPS. How much exactly is that? I mean, it's 16 million instructions per second, but in terms of running loop code, how many times will it be able to go through the loop in a second?
    That will depend on the number of instructions in the loop, and on the overhead of the loop code itself. You have little control over that since the instructions are generated by the compiler. There are techniques for minimising the loop overhead, such as loop unrolling, but a decent optimising compiler will do that in any case when optimisation is enabled and the overhead is significant compared to the loop content.

    Most embedded applications are event driven - they react to events such as interrupts, in this case the arrival of a message from the GPS module. This is typically low-speed serial data with a position fix at 1Hz. So rather than asking how many iterations are possible, you have to ask: can I perform all the necessary calculations and resulting actions before the next fix data arrives? Most embedded systems spend most of their time waiting on events rather than calculating or processing.
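    As a rough back-of-envelope illustration of both points (the 2,000-instruction loop body is purely an assumed figure; the real count comes from the compiler):

    Code:
    /* Rough arithmetic only: how "16 MIPS" relates to loop passes and to the
     * time budget between 1 Hz GPS fixes. */
    #include <stdio.h>

    int main(void)
    {
        const double instructions_per_second = 16e6; /* ATmega32 at 16 MHz, ~1 instruction per clock */
        const double instructions_per_pass   = 2000; /* assumed size of one trip around your loop    */
        const double fix_rate_hz             = 1.0;  /* typical GPS position-fix rate                */

        printf("roughly %.0f loop passes per second\n",
               instructions_per_second / instructions_per_pass);
        printf("roughly %.0f instructions available between fixes\n",
               instructions_per_second / fix_rate_hz);
        return 0;
    }

    With numbers on that scale, the interesting question is the second one: whether your calculation fits comfortably inside the second between fixes.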


    Originally Posted by zedeneye1
    - What exactly is an instruction? When it (the ATmega32 datasheet) says it can do 16 MIPS, what kind of instruction is it talking about?
    It refers to a single machine level instruction. Your compiler translates your human readable high-level C language code into much more primitive machine code. Machine code instructions perform very simple operations such as read a memory location into a register, write a register to a memory location, branch to an address if a register is zero, add two values, subtract two values etc. The AVR instruction set is summarised here.
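    To give a feel for the granularity, a single C statement typically expands to just a handful of those instructions. For example (illustrative only; the assembly in the comment is roughly what an AVR compiler emits, and the exact output depends on the compiler and options):

    Code:
    #include <stdint.h>

    volatile uint8_t a, b, c;   /* volatile so the compiler keeps the loads and stores */

    void add_once(void)
    {
        /* For c = a + b the compiler emits something along these lines:
         *     lds  r24, a     ; load a from SRAM into a register
         *     lds  r25, b     ; load b
         *     add  r24, r25   ; 8-bit add
         *     sts  c, r24     ; store the result back to SRAM
         * i.e. one C statement, about four machine instructions.
         */
        c = a + b;
    }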


    Originally Posted by zedeneye1
    - Is there some software to simulate a microcontroller and see how many times it would execute loop code in a second?
    Yes, you need a cycle-accurate instruction set simulator. What tool chain are you using? You may already have one. Atmel AVR Studio, for example, includes a simulator; you run the code in simulation within the debugger by selecting Debug Platform->AVR Simulator.

    Originally Posted by zedeneye1
    - So there is no (easy) way to have 64-bit data on a processor of lower bits like 32, 16, 8, etc.? And you will always have to write complex code to compensate for it...
    Processing 64-bit data on an 8-bit machine does indeed require more instructions than handling smaller types, but the compiler generates that code for you; at the C code level it is just a case of using an appropriate data type and accepting that the code may run significantly slower. On an AVR you generally want to use the smallest data type that meets the requirements for range and precision, so you were probably right all along to use float over double in this case.
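    For example, both increments below look equally simple in C, but on an 8-bit AVR the compiler has to emit far more code for the 64-bit one (illustrative; the exact instruction counts depend on the compiler):

    Code:
    #include <stdint.h>

    uint8_t  small_counter;   /* one byte: an increment is just a few instructions */
    uint64_t big_counter;     /* eight bytes: the compiler must load, increment with
                                 carries, and store all eight bytes on an 8-bit CPU */

    void tick(void)
    {
        small_counter++;
        big_counter++;
    }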

    By the way, C++ compilation is well supported on AVR; I suggest you consider using it over C, even if you don't do OOP. C++ is a bigger toolbag, and many of those tools come at zero cost compared to C code.
  18. #25
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Dec 2012
    Posts
    91
    Rep Power
    2
    Originally Posted by clifford
    That will depend on the number of instructions in the loop, and on the overhead of the loop code itself. You have little control over that since the instructions are generated by the compiler. There are techniques for minimising the loop overhead, such as loop unrolling, but a decent optimising compiler will do that in any case when optimisation is enabled and the overhead is significant compared to the loop content.

    Most embedded applications are event driven - they react to events such as interrupts, in this case the arrival of a message from the GPS module. This is typically low-speed serial data with a position fix at 1Hz. So rather than asking how many iterations are possible, you have to ask: can I perform all the necessary calculations and resulting actions before the next fix data arrives? Most embedded systems spend most of their time waiting on events rather than calculating or processing.


    It refers to a single machine level instruction. Your compiler translates your human readable high-level C language code into much more primitive machine code. Machine code instructions perform very simple operations such as read a memory location into a register, write a register to a memory location, branch to an address if a register is zero, add two values, subtract two values etc. The AVR instruction set is summarised here.


    Yes, you need a cycle-accurate instruction set simulator. What tool chain are you using? You may already have one. Atmel AVR Studio, for example, includes a simulator; you run the code in simulation within the debugger by selecting Debug Platform->AVR Simulator.

    Processing 64-bit data on an 8-bit machine does indeed require more instructions than handling smaller types, but the compiler generates that code for you; at the C code level it is just a case of using an appropriate data type and accepting that the code may run significantly slower. On an AVR you generally want to use the smallest data type that meets the requirements for range and precision, so you were probably right all along to use float over double in this case.

    By the way, C++ compilation is well supported on AVR; I suggest you consider using it over C, even if you don't do OOP. C++ is a bigger toolbag, and many of those tools come at zero cost compared to C code.
    so let me get this straight :

    C>>assembly language>>machine code

    is that the correct sequence as in "level" of language? I mean first is machine code which the actual processor understands. Then they made assembly language to make it easier and then C to make even assembly language easier...? I mean that C is higher level than assembly which is higher level than machine, and machine code is the "0 level" or "ground floor of programming"?

    Also, you said that in C you cannot tell the number of instructions, so my question now is: can you tell the number of instructions if you have assembly or machine code? I mean the ".o" file which codeblocks generates; can you tell from that how many instructions there are for a piece of code?

    Should I learn assembly language to better understand how programming really works? Can you suggest some good book(s) on everything...? I like books that go into all the tiny details...

    And I was going to learn C++ when completely done with C and was thinking maybe learning some graphics like opengl etc...? what do you suggest?
  20. #26
  21. Contributing User
    Devshed Supreme Being (6500+ posts)

    Join Date
    Jan 2003
    Location
    USA
    Posts
    7,210
    Rep Power
    2222
    Originally Posted by zedeneye1
    so let me get this straight :

    C>>assembly language>>machine code

    is that the correct sequence as in "level" of language?
    Yes, that is pretty much correct. Except the leap from C to assembly is very much greater than from assembly to machine code.

    Originally Posted by zedeneye1
    I mean first is machine code which the actual processor understands. Then they made assembly language to make it easier and then C to make even assembly language easier...? I mean that C is higher level than assembly which is higher level than machine, and machine code is the "0 level" or "ground floor of programming"?
    Machine code is indeed where the rubber meets the road, where you're getting down to the metal. But the rest isn't quite right.

    Describing the history rather simplistically, at first you had to program a computer by literally writing the ones and zeros into its memory; eg, on our school's PDP-9, you could load your program through a paper tape reader, but first somebody had to "finger-bone" the loader program into its ferrite-core memory by setting the data switches and loading that setting into memory. That machine code consisted of numeric values that represented the instruction and its operands; each such grouping of words was an instruction and the program consisted of a sequence of instructions that would perform a task. That entire process was very time-consuming and laborious.

    The first translation programs were the assemblers, in which the programmer could write human-readable instructions consisting of a mnemonic (a character string, usually three letters long, that represented one specific instruction) and symbolic representations for the operands. These symbolic representations could be characters representing a register (eg, A, B), a symbolic name for a memory location, or a literal value. In addition, you could assign symbolic names to memory locations, leaving it up to the assembler exactly where those locations were rather than having to figure all that out by hand. This made a program much easier to write, read, and debug, but you were still writing out each individual instruction.

    However, since each computer model had its own individual instruction set, which includes the actual numeric value representing a particular instruction (eg, ADD would translate to different numbers on different computers) as well as its own CPU architecture (ie, different sets of registers that could be different sizes -- the 8-bit byte was not a standard word length), your assembly program would only work on the computer you wrote it for. That meant that if you wanted to run the same program on a different computer, then you had to rewrite the entire program all over again from scratch. Outside of processor families designed to be compatible with each other (eg, the Intel 80x86 family), assembly programs have zero portability.

    That problem was solved by compilers, translation programs that would translate a higher-level language that is not associated with any particular CPU instruction set into the target machine's machine code. This way, the same C program could be compiled for an Intel or a Motorola or a VAX or an IBM (non-PC) or whatever computer system you may be using, and it would run and produce the same output, barring any implementation dependencies.

    So, while the main reason for assemblers was indeed to make programming easier, the main reason for higher-level languages like C was much more than making it easier: it was portability. What you said wasn't really wrong, but it left far too much out.

    Originally Posted by zedeneye1
    Also, you said that in C you cannot tell the number of instructions, so my question now is: can you tell the number of instructions if you have assembly or machine code? I mean the ".o" file which codeblocks generates; can you tell from that how many instructions there are for a piece of code?
    So then, while there is a one-to-one correspondence between a line of code in assembly (a single instruction) and a machine code instruction, there is no such equivalency between C and machine code (nor between C and assembly). A single C statement could generate only a few instructions or dozens of instructions. A C control structure (eg, for, while, switch) would generate an entire structure of assembly code into which the other lines of code would generate their corresponding instructions. A simple function call would generate supporting thunk code that would set everything up for the actual call, such as setting up the stack with the return address and setting up the function's local variables on the stack, as well as evaluating the arguments that are being passed; how many instructions that would generate depends on each individual function call.

    Plus, the number of instructions generated also depends on the CPU's instruction set. It used to be that not all CPUs had a divide instruction, so a division routine had to be added; same thing with whether there's an FPU (floating-point unit) available. Even addressing memory could be an issue; I used to write assembly for an 8048-family processor which could only address memory by the code explicitly loading one of two memory address registers, so if you had three or more addresses to work with, you had to add code that would explicitly swap those addresses in and out as needed, which would generate even more code than with a more powerful processor (actually, there was no C compiler for the 8048; compiler support started with the 8051).

    Does that answer your question?

    Now, if you want to see what assembly code your C code generates, tell your compiler to generate an assembly listing. I don't know how to do it in codeblocks, but in gcc (which I believe codeblocks uses) the command switch is -S, which creates a .s text file containing the generated assembly. If you compile with debug information (-g), you can also run objdump -S on the object file to see each C statement followed by the assembly it generated.
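    For example, with a small file like this (the file name and function are made up purely for illustration):

    Code:
    /* heading.c
     *
     * To see the assembly gcc produces:
     *     gcc -S -O1 heading.c          (writes heading.s next to the source)
     *
     * Or, to interleave the C source with the disassembly:
     *     gcc -c -g heading.c && objdump -S heading.o
     */
    double knots_to_kmh(double v)
    {
        return v * 1.852;   /* just something for the compiler to chew on */
    }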

    Originally Posted by zedeneye1
    Should I learn assembly language to better understand how programming really works? Can you suggest some good book(s) on everything...? I like books that go into all the tiny details...

    And I was going to learn C++ when completely done with C and was thinking maybe learning some graphics like opengl etc...? what do you suggest?
    Of course, the more you learn the better, but assembly is getting used less and less as C and C++ have become increasingly prevalent in embedded programming. There are just a few places where a smattering of assembly is used, such as in interrupt service routines. I would think that you should get a solid understanding of C and C++ and then go back and learn assembly.

    Your project does seem to call for learning how to interface to the processor and how to handle interrupts. I learned by reading SAMS' 8080A Bug Book, which completely covered the microprocessor and its operation, but that was the late 70's; even amazon.com has zero copies in stock, and I don't know what equivalent book is out there. Certainly there's a processor board that you have in mind for your project, such that there may be some books or other materials published about working with it. My big problem with recommending books is that all the books I learned from are out of print and I don't know what's currently out there.

    If you're going to be talking with a GPS receiver, then you'll need to set up and work with a serial port, and that will normally require working with interrupts, unless your board handles that for you. You'll be facing enough challenges in your project without having to worry about learning C as well. Get a solid handle on C and C++ first.
  22. #27
  23. Contributing User

    Join Date
    Aug 2003
    Location
    UK
    Posts
    5,116
    Rep Power
    1803
    Originally Posted by zedeneye1
    so let me get this straight :
    C>>assembly language>>machine code
    Assembly language is simply a human readable representation of machine code, where binary bit fields are replaced with mnemonics and arguments - one assembler instruction maps directly to one machine code instruction (opcode). I once had to program a processor in true machine language via a hex keypad, but only as an academic exercise. I also once used a PDP-8 Minicomputer that had to be bootstrapped by entering machine code instructions via binary switches and then flipping the "run" switch.

    Originally Posted by zedeneye1
    is that the correct sequence as in "level" of language? I mean first is machine code which the actual processor understands. Then they made assembly language to make it easier and then C to make even assembly language easier...? I mean that C is higher level than assembly which is higher level than machine, and machine code is the "0 level" or "ground floor of programming"?
    Wikipedia's article on the subject does a better job of explaining it than I will.


    Originally Posted by zedeneye1
    Also, you said that in C you cannot tell the number of instructions, so my question now is: can you tell the number of instructions if you have assembly or machine code? I mean the ".o" file which codeblocks generates; can you tell from that how many instructions there are for a piece of code?
    Object files contain a good deal of meta-data and debug information, so that estimation would be misleading. Your compiler almost certainly has an option to generate assembler listing files so that you can see the code it generates; also, within the debugger or simulator you can perform a "disassembly" and view and step the code at the assembler/instruction level. Usually the assembler is expanded inline with the source that generated it; however, to be useful you would not normally switch on optimisation, since that can reorder and remove code in ways that are hard to fathom.

    Originally Posted by zedeneye1
    Should I learn assembly language to better understand how programming really works?
    Perhaps; that is how I was trained, but that was a different time, when compilers were rare beasts. I think you'd have to be very motivated to bother. I always tell people that for the most part assembler programming per se is unnecessary. Every processor architecture has a distinct instruction set, and the effort of learning one has a tendency to lock you into that architecture when there are often better solutions. It is well to have a working knowledge of assembler in a "read-only" sense, as it can help with advanced debugging, but as far as writing it goes, treat your compiler as an expert system that understands how best to use the instruction set. There is an article on this subject at Embedded.com.
    Originally Posted by zedeneye1
    Can you suggest some good book(s) on everything...? I like books that go into all the tiny details...
    For AVR I suggest that the best source of information is Atmel's own part reference manuals, data sheets, and programmer's reference. As I said, each architecture has a different instruction set, so you'd need a book on the architecture of interest, and modern 8-bit microcontrollers don't attract too many authors these days.
    AVR Instruction Set Manual
    ATMega32 Reference manual
    Other documents and app notes

    Originally Posted by zedeneye1
    And I was going to learn C++ when completely done with C and was thinking maybe learning some graphics like opengl etc...? what do you suggest?
    People learning C++ after C often have trouble adopting the OO paradigm, and for many embedded systems, especially on small devices such as yours, there are good arguments for not using C++, or at least not using all of its OOP features. On a desktop system, however, I would not recommend using C at all, not least because most modern GUI frameworks are C++, and C++'s interoperability with C means that you can use all C libraries plus those written specifically for C++.

    Again there are some good articles on embedded.com on C++ vs C. I have listed a number of them in my answer to this question on StackOverflow
    Last edited by clifford; January 5th, 2013 at 04:43 AM.
  24. #28
  25. Contributed User
    Devshed Specialist (4000 - 4499 posts)

    Join Date
    Jun 2005
    Posts
    4,403
    Rep Power
    1871
    Did this thread have a topic, or is it now just zedeneye1's entire education thread?

    Comments on this post

    • clifford agrees : But he's enthusiastic and keen and asks interesting questions which is somewhat refreshing around here. ;-)
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper