Solve: Automatically generated code?
Answer» Hi!

Quote
Modern High-Level languages seem to have predefined functions, procedures or macros that actually are so flexible that they can fix almost anything you want.

This makes no sense whatsoever. Most High-Level language functionality is not implemented via functions, procedures, or macros at a lower level, but compiled directly. For example, loops are compiled down to Assembly jumps, and macros expand into other code before compilation. (By the way, macros are HEAVILY used in any real Assembly program beyond the trivial, so that argument applies just as easily to Assembly.)

Quote
This means that they have lots of input and output parameters that are useful for arbitrary programming.

Name ONE example of this. Just one. I assure you there isn't an example of this that holds up. C functions that take many arguments, for example printf(), use varargs, which allow any number of arguments; the low-level DESIGN results in no extra prolog or epilog code in the function itself. (A hedged sketch of a varargs function follows after this point-by-point reply.)

Quote
In most cases just a fraction of these parameters are actually used.

Again, citation needed. With C++ overloads it is very seldom that even a single argument is omitted or STUBBED out. Take the most typical case, where each overload calls into a function with more arguments:

Code: [Select]
// Java-style sketch: the one-argument overload forwards to the
// two-argument overload with a default second argument of 5.
public int somefunction(int firstargument) {
    return somefunction(firstargument, 5);
}

public int somefunction(int firstargument, int secondargument) {
    return firstargument ^ secondargument;
}

A call to the first function results in a call to the second function with 5 supplied as the second argument. However, even though that second parameter is "defaulted", it has absolutely no measurable impact on performance, even when done in tight loops. If you can provide a citation or proof that an extra parameter of this sort (or any number of extra parameters, below the ridiculous) has a perceptible performance impact, I'd LOVE to see it. Otherwise, it's just random musings. Additionally, if the two functions are flagged as inline, it's not atypical for the compiler to put the result inline where the function is called, eliminating any theoretical function-call overhead (which itself only has a measurable impact if you are running 35-year-old hardware). A C++ version of this forwarding-and-inlining pattern also follows below.

Quote
This means that the compiler has to generate lots of machine code which never is used.

No it doesn't. In fact, the very purpose of functions is to generate less code. That is also why most compilers will not inline function calls above a certain size, and why the inline directive in C and C++ is more a guideline telling the compiler that a function can be inlined, not that it should be. Depending on the compiler's analysis it might not inline it at all, because the result of inlining is always larger code (a la "generating more machine code"), and it doesn't take long for any performance benefit from inlining to be LOST to the larger amount of code being dealt with.

Quote
Which slows down the CPU due to the fact that it has to chew it all and not only the necessary function.

The only merit to your argument would be prolog and epilog code, which any number of code-trace tools will confirm takes very little processor time compared to everything else; even on an 8088 it was almost imperceptible.
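Since printf-style varargs came up above, here is a minimal, hedged sketch of how a C-style variadic function is written. sum_ints is a made-up name for illustration, not printf's actual implementation; the point is only that the callee is compiled once, with one prolog and epilog, no matter how many arguments any particular caller passes.

Code: [Select]
#include <cstdarg>
#include <cstdio>

// Minimal varargs sketch (not printf itself): sums 'count' ints passed
// after the fixed parameter.  The callee's prolog/epilog is the same
// regardless of how many arguments the caller actually supplies.
static int sum_ints(int count, ...) {
    va_list args;
    va_start(args, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int);
    va_end(args);
    return total;
}

int main() {
    std::printf("%d\n", sum_ints(3, 1, 2, 3));        // prints 6
    std::printf("%d\n", sum_ints(5, 1, 2, 3, 4, 5));  // prints 15
    return 0;
}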
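And here is roughly what the overload-forwarding snippet above looks like in actual C++, with both overloads marked inline. This is only a sketch; somefunction and the default value 5 are carried over from the example, not anyone's real API, and whether the compiler actually inlines either function is, as said above, entirely its decision.

Code: [Select]
#include <iostream>

// Two-argument overload does the real work.
inline int somefunction(int firstargument, int secondargument) {
    return firstargument ^ secondargument;
}

// One-argument overload just forwards with a default second argument of 5.
// The "extra" parameter costs nothing measurable per call, and an optimizing
// compiler will typically fold the whole chain into the call site anyway.
inline int somefunction(int firstargument) {
    return somefunction(firstargument, 5);
}

int main() {
    std::cout << somefunction(12) << '\n';     // same as somefunction(12, 5)
    std::cout << somefunction(12, 5) << '\n';  // prints the same value
    return 0;
}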
Anyway, to once again address your entire argument that Assembly is "better" than other languages: it's not. Can a hand-tuned Assembly program run faster than an equivalent program written in another language? Yes. Yes it can. But that comes with costs.

First, the performance difference is often barely measurable.

Second, the extensibility and maintainability of the program - even a well-commented one - drop to abysmal levels. The number of bugs skyrockets, not because of Assembly itself, but simply because using Assembly makes the amount of code skyrocket, which results in more bugs. More code written means more bugs, no matter how careful or skilled the programmer is.

Additionally, transformation inefficiencies (which is the term for what you are talking about) have been practically eliminated. The fact is that the point of writing software is to get people to use it, and a piece of software that exists is going to sell a lot better than one that does not. We can wax philosophical about how "everything will be better using Assembly", but it doesn't work that way. Development will be a gigantic pain in the *censored*; there will be no modularity to the design beyond what your assembler provides, which usually means macros that are completely specific to that vendor's assembler, so you'll be dependent on a single vendor for your Assembler... UNLESS you write your own - and that's further development cost on top of everything else.

So what do we end up with? Let's go with a large software product, like a Word Processor. Assembly brings - if written properly - speed improvements. But:

1. Word processors spend most of their time idle, waiting for user input. You can't exactly idle for user input faster.
2. The Assembly doesn't magically make things faster. Even relatively advanced Assembly programmers don't match a good compiler, especially on modern instruction sets.

The only time Assembly really works is as a low-level optimization for something that has already had its performance squeezed. For example, Michael Abrash focuses primarily on Assembly optimizations in the "Graphics Programming Black Book". One such discussion covers how his expertise was applied to optimize the Hidden Surface Removal algorithms in Quake. The original algorithms were improved first; I believe there were several chapters of completely language-agnostic discussion about the correct choice of algorithm and why he/they decided to go with, say, Edge surface caching. The target minimum CPU and memory for the game (a 486 with 8MB of RAM) were a roadblock for C at the time. C compilers were alright, but you could still save a considerable amount of processing time and memory by writing Assembly instead - but only when you did it properly.

Another point is that the entire Quake source was not Assembly - only the time-critical portions. Some other parts were temporarily converted to Assembly, but the performance gains weren't worth the loss in maintainability (and sometimes there weren't any performance gains at all). The only parts that were CHANGED to use Assembly were the parts they were sure could benefit from some hand-tuned Assembly. Additionally, it was done only because it was necessary: if the game had run acceptably well on a 486 with 8MB of RAM in C, iD would never have bothered to use Assembly for segments of the game, because it would have been nothing more than a waste of memory. All programs can be made faster, but there is no point making a program that is already fast enough faster, because that extra speed is meaningless. Those optimizations, which took John Carmack and Michael Abrash several months to work out and implement, do nothing for the game today.
They were vital at the time in order to meet the goal (running within certain processor and memory requirements), but they do nothing for it today. In fact, some programs that were optimized in Assembly now run provably slower than a higher-level language would, simply because the Assembly is static: the Quake Hidden Surface Removal did not magically start using new instructions when later processors made them available. Compilers, however, can turn the same code into machine instructions that take advantage of those instructions as needed. Just-in-time compiled languages like Java and the .NET languages take this even further; you can run them on a variety of platforms and gain massive advantages. For example, a C# program can run in native x64 mode on 64-bit processors and take advantage of any available instructions; it will use SSE, MMX, and other extended instructions if they are present. The advantage here is that the same source code compiles to all sorts of different machine instructions, optimized for various architectures as needed (a short sketch of how a statically compiled language can approximate this by hand follows below). With Assembly, you would spend months optimizing for a very specific processor and memory budget, and sometimes those optimizations are wasted when your assumptions or instruction-cycle measurements suddenly prove false on a later processor. This is exactly why so many programs written in Assembly in the '80s and early '90s are absolutely useless today. It's not so much that Operating Systems and processors have advanced as it is that the code was written with the assumption that both of those things would stay static.
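To illustrate the contrast with JIT compilation mentioned above: a statically compiled language can recover some of the same benefit by shipping more than one code path and choosing between them at run time. This is only a minimal sketch, assuming GCC or Clang (the __builtin_cpu_supports builtin is specific to those compilers), and the blend_* functions are hypothetical placeholders, not anyone's real routines.

Code: [Select]
#include <cstdio>

// Hypothetical scalar and SIMD-tuned variants of the same routine.
static void blend_scalar() { std::puts("using the plain scalar path"); }
static void blend_sse41()  { std::puts("using the SSE4.1-tuned path"); }

int main() {
    // GCC/Clang builtin: queries the CPU the program is running on,
    // not the one it was compiled for.
    if (__builtin_cpu_supports("sse4.1"))
        blend_sse41();
    else
        blend_scalar();
    return 0;
}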