Comparison of high-level computer programming languages

In summary, the author argues that computational speed is a matter of little concern to users, and that comparisons between languages are complicated.
  • #36
eachus said:
One part of my job at MITRE (and there were half a dozen of us who did this) was to get all of the misunderstandings about Ada out of the software design rules well before coding started on Air Force electronics projects. Sometimes, though, we ran into managers who had added their own rules taken from a magazine somewhere.

In theory, our rules were guided by some Carnegie Mellon advice. I thought that their advice was very wise, flexible, and appropriate. The part that management disliked and eliminated from our rules was flexible.

eachus said:
The most important rule in Ada programming, though, is that if the language seems to be getting in your way, it is trying to tell you something.

On a large program, it doesn't matter what the code is telling me. We have to follow the programming standards that management presents to the government.
 
  • #37
FactChecker said:
In theory, our rules were guided by some Carnegie Mellon advice. I thought that their advice was very wise, flexible, and appropriate. The part that management disliked and eliminated from our rules was flexible.

On a large program, it doesn't matter what the code is telling me. We have to follow the programming standards that management presents to the government.

We granted far more waiver requests than we turned down. The only one I can remember turning down was for 25 KSLOC of C. The project had no idea what the code did, since the author had left over two years earlier. I looked at the code, and it was a simulation of a chip that had been developed for the project, to let them test the rest of the code without the chip. Since the chip was now there, I insisted that they replace the emulation with code (about 100 lines) that actually used the chip. The software ran a lot faster after that. Another waiver request I remember was to allow 17 lines of assembler. I showed them how to write a code insert in Ada. Issue closed.

In general, we found that the most decisive factor in whether a software project succeeded or not was the ratio of the MIPS of the development machines to the number of software engineers using them to develop and test code. A ratio significantly under one was trouble; two or three, no problem. Of course, today everybody has a PC faster than that, so problems only arose when the software was being developed in a classified lab.
 
  • Like
Likes FactChecker
  • #38
mpresic said:
Back to the main point. Documentation and understandability should be more of a priority than speed. You have engineers who can make the most high-level, user-friendly language inscrutable, and you have engineers who can make (even) structured Fortran or assembly language understandable.
In general, there will be project requirements, and those requirements must be met. It sounds a bit religious to emphasize how one must address one potential requirement over another.

If I need to present results at a meeting that's two hours away, I will be concentrating on rapid short-term development and execution. If I need to control a mission-critical military platform that will be in service for 8 years, I will be concentrating on traceability, maintainability, ease of testing, version control, auditability, etc.

To address the OP's question:
If benchmarks using Navier-Stokes equations will document ground not covered in existing benchmarks, then there is potential use in it. I don't know much about Navier-Stokes equations, but if they are used in simulations that tend to run past several minutes, then there may be consumers of this data.

As far as using Matlab-generated C code, by all means include that in the survey. You will be documenting hardware, software, everything: version numbers, configuration data, and the specific method(s) used to implement the solution on each platform.

Since the code you produce will be part of your report, it should be exemplary in style and function.
 
  • Like
Likes FactChecker
  • #39
This thread reminds me of a PF Insights Article. The article and the ensuing discussion parallel this thread in many ways.

The article: https://www.physicsforums.com/insights/software-never-perfect/

The discussion: https://www.physicsforums.com/threa...r-perfect-comments.873741/page-2#post-5565499

I'll quote myself complaining that modern software engineering methods and discipline do not scale down well, and that this is a serious problem because of the IoT.

anorlunda said:
Consider a controller for a motor operated valve (MOV). The valve can be asked to open, close, or to maintain an intermediate position. The controller may monitor and protect the MOV from malfunctions. In the old days, the logic for this controller would be expressed in perhaps 100-150 bytes of instructions, plus 50 bytes of data. That is so little that not even an assembler would be needed. Just program it in machine language and type the 400 hex digits by hand into the ROM burner. A 6502, or 8008, or 6809 CPU variant with on-chip ROM would do the work. The software would have been the work product of a single person working less than one work-day, perhaps checked by a second person. Instantiations would cost about $1 each. (In the really old days, it would have been done with discrete logic.)

In the modern approach, we begin with standards, requirements, and design phases. Then the logic would be programmed in a high-level language. That needs libraries, and those need an OS (probably a Linux variant), and that brings in more libraries. With all those libraries come bewildering dependencies and risks (for example https://www.physicsforums.com/threads/science-vulnerability-to-bugs.878975/#post-5521131). All that software needs periodic patches, so we need to add an Internet connection (HORRORS!) and add a user interface. With that comes all the cybersecurity and auditing overhead. All in all, the "modern" implementation includes ##10^4## to ##10^6## times more software than the "old" 200 byte version, to perform the same invariant MOV controller function.

Now you can fairly call me old fashioned, but I find it hard to imagine how the world's best quality control procedures and software standards could ever make the "modern" implementation as risk-free or reliable as the "old" 200 byte version. Worse, the modern standards probably prohibit the "old" version because it can't be verifiabull, auditabull, updatabull, securabull, or lots of other bulls. I argue that we are abandoning the KISS principle.

Now, the reason that this is more than a pedantic point is the IoT (Internet of Things). We are about to become surrounded by billions of ubiquitous micro devices implemented the "modern" way rather than the "old" way. It is highly germane to stop and consider if that is wise.
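To make the scale concrete, here is a minimal sketch of roughly the logic that the "old" 200-byte MOV controller has to implement. It is written in C rather than raw machine code, and the register addresses, command codes, and torque limit are hypothetical stand-ins for whatever the real chip's I/O map would define.

Code:
/* Minimal sketch of a motor-operated-valve (MOV) controller loop.
 * Register addresses, command codes, and the torque limit are
 * hypothetical; a real part would define its own I/O map.
 */
#include <stdint.h>

#define CMD_OPEN   1u
#define CMD_CLOSE  2u
#define CMD_HOLD   0u

/* Hypothetical memory-mapped I/O registers. */
#define CMD_REG    (*(volatile uint8_t *)0x40)  /* requested command        */
#define SETP_REG   (*(volatile uint8_t *)0x41)  /* requested position       */
#define POS_REG    (*(volatile uint8_t *)0x42)  /* measured position        */
#define TORQUE_REG (*(volatile uint8_t *)0x43)  /* measured motor torque    */
#define MOTOR_REG  (*(volatile uint8_t *)0x44)  /* 0=stop, 1=open, 2=close  */

#define TORQUE_LIMIT 200u   /* trip point protecting the valve and motor */

void mov_step(void)
{
    if (TORQUE_REG > TORQUE_LIMIT) {   /* protect the MOV from malfunction */
        MOTOR_REG = 0;
        return;
    }
    switch (CMD_REG) {
    case CMD_OPEN:  MOTOR_REG = (POS_REG < 255u) ? 1u : 0u; break;
    case CMD_CLOSE: MOTOR_REG = (POS_REG > 0u)   ? 2u : 0u; break;
    default:                            /* hold an intermediate setpoint */
        if      (POS_REG + 2u < SETP_REG) MOTOR_REG = 1u;
        else if (POS_REG > SETP_REG + 2u) MOTOR_REG = 2u;
        else                              MOTOR_REG = 0u;
        break;
    }
}

int main(void)
{
    for (;;) mov_step();   /* poll forever; no OS, no libraries */
}

Compiled for a small 8-bit part, logic of this size is plausibly in the neighborhood of the couple of hundred bytes quoted above, which is the point of the comparison.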
 
  • #40
.Scott said:
If benchmarks using Navier-Stokes equations will document ground not covered in existing benchmarks, then there is potential use in it. I don't know much about Navier-Stokes equations, but if they are used in simulations that tend to run past several minutes, then there may be consumers of this data.
Navier-Stokes equations are at the core of Computational Fluid Dynamics and are, indeed, used in very long series of runs. For instance, aerodynamics calculations that account for every combination of angle of attack, angle of sideslip, Mach number, altitude, and surface positions would take a very long time to run. Supercomputers are sometimes necessary.
 
  • #41
Consider the importance of near-time calculations to experimenters operating a wind tunnel to generate and collect fluid dynamics data for subsequent analysis.

Suppose we are testing a scale model of a Boeing 777 wing mounted in a subsonic wind tunnel to determine the effects a winglet has on laminar flow around the primary wing as alpha (angle of attack) varies. The wind tunnel software computes and displays the Reynolds number (an indicator of the transition from laminar to turbulent flow) alongside alpha, to guide operations in near-time and maximize use of resources; perhaps by restricting angle of attack past a selected turbulence measure, or by inhibiting full-scale data collection when turbulence exceeds the flight envelope (operational limits) of an actual 777.

https://en.wikipedia.org/wiki/Reynolds_number .
See also "The Wind Tunnels of NASA" and NASA ARC Standardized Wind Tunnel System (SWTS).

The system programmer provides not only real-time data collection but also near-time data sampling and computation of vital measurements such as the Reynolds number while the experiment runs. The wind tunnel software computes selected values as quickly and as error-free as possible in order to provide the best data at run time for later (super)computation. The software engineer recognizes that some computations are time-critical for operational reasons. Later fluid dynamics computations could be time-sensitive due to cost and supercomputer time sharing.
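For reference, the per-frame calculation being described is small. Here is a minimal sketch in C of the Reynolds number computation Re = rho*V*L/mu that the tunnel software would display alongside alpha; the numerical values and the idea of printing to stdout are illustrative stand-ins for the tunnel's data-acquisition and display layer.

Code:
/* Minimal sketch of a near-time Reynolds number computation.
 * The example values below are illustrative, not tunnel data.
 */
#include <stdio.h>

static double reynolds(double rho, double v, double chord, double mu)
{
    /* Re = (density * velocity * characteristic length) / dynamic viscosity */
    return rho * v * chord / mu;
}

int main(void)
{
    /* Example: sea-level air over a 0.5 m model chord at 40 m/s. */
    double rho   = 1.225;    /* kg/m^3 */
    double v     = 40.0;     /* m/s    */
    double chord = 0.5;      /* m      */
    double mu    = 1.81e-5;  /* Pa*s   */
    double alpha = 4.0;      /* deg, angle of attack from the balance */

    double re = reynolds(rho, v, chord, mu);
    printf("alpha = %5.1f deg   Re = %.3e\n", alpha, re);
    return 0;
}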
 
  • #42
Ultimately all compilers, or the compilers used to compile the compilers, were written in C/C++. It can do anything with no restrictions. It's easy to write very efficient, very fast code; it's also just as easy to shoot yourself in the foot with it. But remember the old adage: "There will never be a programming language in which it is the least bit difficult to write terrible code". That said, C# does have one thing going for it in that the application developer can allow third parties and end users to extend the application through code while also restricting what system functions that code has access to. So users can share code on the internet without worrying about getting a virus, as long as the developer locked out I/O system calls, or put them behind appropriate custom versions of those functions.
 
  • #43
FarmerTony said:
end users to extend the application through code while also restricting what system functions that code has access to. So users can share code on the internet without worrying about getting a virus, as long as the developer locked out I/O system calls, or put them behind appropriate custom versions of those functions.

How does an end user do that?

How can an end user audit the safety practices of the developer?

As long as there is an "as long as" proviso, the prudent end user must presume that the proviso is not met.
 
  • Like
Likes FactChecker
  • #44
FactChecker said:
@eachus, I wish no offense, but in summary, was there a run-time difference between C and Ada? I like a "you were there" type of description, but only after a summary that tells me whether it is worth reading the details.
This type of timing comparison of a single calculation done many times may not reflect the true difference between language execution speeds.

Results:
Multiplication result was 0.045399907063 and took 147318.304 Microseconds.
Exponentiation result was 0.045399907063 and took 0.291 Microseconds.
Exponentiation 2 result was 0.045399907062 and took 0.583 Microseconds.
Fast exponentiation result was 0.045399907062 and took 0.875 Microseconds.

Sorry if it wasn't clear. The first result, taking 0.147318 seconds, was comparable to, but faster than, all the previously published results. The next three results took advantage of much better optimization and took less than one microsecond; all were over 100,000 times faster than the first result. The fact that these three approaches took one, two, and three clock ticks should not be taken to mean that one was better than the other two. (All were better than the first result.) If I really needed something that fast, I'd run the program 20 times or so to make sure that the results were consistent. But once you get the optimizer to knock out over 99.999 percent of the execution time, I wouldn't worry.
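For concreteness, here is a minimal sketch in C of this kind of micro-benchmark. It is a stand-in, not the original Ada program: the base, the repetition count, and the timing method are arbitrary illustrative choices. The same power is computed once by a loop of repeated multiplications and once by a single library exponentiation, and each is timed.

Code:
/* Sketch of a micro-benchmark comparing N repeated multiplications
 * against one call to pow().  Values are illustrative only.
 * Build with -lm; the volatile qualifiers discourage the optimizer
 * from collapsing the loop entirely.
 */
#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long   n = 10000000L;
    const double x = 0.9999999;   /* illustrative base close to 1 */

    clock_t t0 = clock();
    volatile double by_loop = 1.0;
    for (long i = 0; i < n; ++i)
        by_loop *= x;             /* n explicit multiplications */
    clock_t t1 = clock();

    volatile double by_pow = pow(x, (double)n);   /* one library call */
    clock_t t2 = clock();

    printf("loop: %.12f  (%.3f ms)\n", by_loop,
           1000.0 * (t1 - t0) / CLOCKS_PER_SEC);
    printf("pow : %.12f  (%.3f ms)\n", by_pow,
           1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}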
 
  • #45
eachus said:
But once you get the optimizer to knock out over 99.999 percent of the execution time, I wouldn't worry.
This sounds too good to be true. I don't know exactly what you ran or are comparing, but that is too much improvement from an optimizer. One thing to look for is that the "optimized" version may not really be running the same number of iterations or the same calculation. That can happen because some identical calculation is being done time after time and the optimizer moved that code out of the loop.

PS. Although pulling repeated identical calculations out of a loop is a good optimization step, it is not representative of the average speedup you can expect from an optimizer.

PPS. The last time I saw a speedup like that, the "fast" version had completely removed a loop of 1000 iterations and was executing the calculations only once.
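A minimal C illustration of that pitfall (hypothetical, not taken from the benchmark under discussion): the loop body below does not depend on the loop index or on the previous value, so an optimizer is entitled to compute it once, or to delete the loop altogether if the result is unused. The "optimized" timing then no longer measures N iterations at all.

Code:
/* Hypothetical illustration of why a huge "speedup" can mean the
 * optimizer removed the work rather than made it faster.
 * Build with -lm.
 */
#include <math.h>
#include <stdio.h>

double benchmark(double x, long n)
{
    double y = 0.0;
    for (long i = 0; i < n; ++i) {
        /* Loop-invariant: the right-hand side depends neither on i nor
         * on the previous y, so the compiler may hoist it out of the
         * loop, or remove the loop entirely if y is never used. */
        y = sin(x) * cos(x);
    }
    return y;
}

int main(void)
{
    /* With optimization on, this may run in constant time regardless of n. */
    printf("%f\n", benchmark(0.5, 10000000L));
    return 0;
}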
 
  • Like
Likes bhobba and anorlunda
  • #46
FactChecker said:
This sounds too good to be true.

Speed comparisons often reveal strange things:
http://alexeyvishnevsky.com/2015/05/lua-wraped-python/

It turns out that for many tasks a highly optimized just-in-time compiled language like Lua is as fast as C - the version of Lua is LuaJIT:
http://luajit.org/

But as the above shows, even plain interpreted Lua is pretty fast - the same as C in that application - but personally I use LuaJIT.

It's easy to call Lua from Python using a C program as glue. I personally, on the very rare occasions I program these days, just write it in Python. Usually it's fast enough, but if it isn't, I add some write statements to see which bits it's spending most of its time in, write those in Lua, and call them from Python. For simple programs I just write it in MoonScript, which compiles to Lua from the start. I have only done it on a couple of programs while I was programming professionally, but for really critical parts I write in assembler. I only use C programs for glue - it's good for that - most languages can call, or be called by, other languages using C. Although the link I gave used some functionality integrated into Python to execute Lua - called lupa, an extension for CPython. So for me it goes like this: Python, Lua, and rarely assembler.

Thanks
Bill
 
  • #47
bhobba said:
Speed comparisons often reveal strange things:
http://alexeyvishnevsky.com/2015/05/lua-wraped-python/

It turns out that for many tasks a highly optimized just-in-time compiled language like Lua is as fast as C - the version of Lua is LuaJIT:
http://luajit.org/
The languages discussed were Ada and C. I don't know what exactly was being compared or run when the claim was that the optimizer option sped execution up by a factor of 100 thousand. No optimizer can do that. It implies that a version of Ada or C was inconceivably slow.
 
  • #48
FactChecker said:
The languages discussed were Ada and C. I don't know what exactly was being compared or run when the claim was that the optimizer option sped execution up by a factor of 100 thousand. No optimizer can do that. It implies that a version of Ada or C was inconceivably slow.

Nor do I. I was simply pointing out that speed comparisons are a strange beast. I highly doubt any optimizer can do that - the big speed-ups usually come from two things:

1. Static typing, as you can do in Cython.
2. Compiling rather than interpreting. Just-in-time compilation is nowadays as fast as ahead-of-time compiling (GCC binaries now run as fast as LLVM), hence LLVM is on the rise as a target that languages are compiled to; you then simply implement LLVM on your machine. I suspect they will eventually exceed the performance of optimized direct compiles - just my view.

But to be clear, you do NOT achieve that type of speed-up with optimizing compilers. JIT compilers, and optimizing them, seem to be the way of the future - but that will not do it either.

Thanks
Bill
 
  • #49
Interpreted languages are a different beast from C or Ada, and large speed differences should not be surprising. But those types of speed issues are caused by drastically different and identifiable approaches. Often the solution is to invoke libraries written in C from the higher-level interpreted language. Python is known to be relatively slow and to benefit from the use of faster libraries.

That being said, I have never seen a speed difference as large as 100 thousand times unless a looping process with thousands of iterations was completely eliminated. In my experience, even context switches and excessive function calls do not cause those types of slowdowns. It is possible that the computer operating system is messing up one of the options and not the other, but I am assuming that those issues have been eliminated.
 
  • Like
Likes bhobba
  • #50
OK, a lot of these responses are exceptionally valuable in their own right, so I won't go into details, but I would suggest you question why you're asking this question (no pun intended).

On the one hand, everything in a high-level language eventually has to be done at a low level, and low-level code is typically faster. So should you always use C over Matlab?

No. In fact, since Matlab is a high-level language, it can do many things under the hood that you might not necessarily need to get involved with. For example, should you be using integers, longs, 128-bit integers? What if you need to decide that dynamically? What about multithreading? Do you really want to get involved with mutexes, race conditions and shared memory?
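To put a concrete face on that last question, this is roughly what "getting involved" looks like in C with POSIX threads. It is a generic sketch (not related to Matlab's internals): once two threads share a counter, the locking below is yours to write and reason about.

Code:
/* Sketch of manual shared-memory synchronization with POSIX threads.
 * Without the mutex, the two threads race on `counter` and the final
 * value is unpredictable.  Build with -pthread.
 */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; ++i) {
        pthread_mutex_lock(&lock);    /* forget this and you have a data race */
        ++counter;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 2000000 only because of the lock */
    return 0;
}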

If you know for a fact, at the machine level, what you want to be doing, and that is the absolute best approach you know of, C/C++/D have no substitute. They do the least amount of work for you and are compiled languages, so the tradeoffs are in your favour. But it will take longer to write.

If, on the other hand, you know what your result looks like and you'd be Googling the algorithm to do that efficiently, then you're better off using a pre-built library. In fact, even the most inefficient platform, since it does a lot of the optimisations for you, will outperform your C++ code, simply because it knows better.

So the real question to ask is what is more important to you: getting the results tomorrow by writing low-level code for a day that then runs near instantly, or writing code in an hour that takes a few minutes to run. If it's the results you want, then obviously use the high-level stuff. If you want to re-use your code as a library, then use the low-level approach.

It's not a simple solution.
 
  • Like
Likes FactChecker
  • #51
One thing I think is undeniably true is that programming languages are the most fun of all topics among programmers.

I'm reminded of when the Ada language was first introduced. They published a document called the Rationale, explaining why they wanted this new language. The Rationale (to the best of my memory) said that in the history of DoD software projects, every single project had created its own language. The exception was JOVIAL, which had been used in two projects. Ada was intended to be the one and only language for all future projects.

So, did Ada become the language to end all languages? Heck no.

I'm confident that as long as humans write software, they will continue creating new programming languages, and there will be a credible rationale for each and every one of them.
 
  • Like
Likes bhobba, Klystron and FactChecker
  • #52
Alex Petrosyan said:
So the real question to ask is what is more important to you: getting the results tomorrow by writing low-level code for a day that then runs near instantly, or writing code in an hour that takes a few minutes to run. If it's the results you want, then obviously use the high-level stuff. If you want to re-use your code as a library, then use the low-level approach.
Good advice, but I think that you are being very conservative in your estimates. Using a low-level language to mimic what one can get in one hour of MATLAB programming could easily take weeks of programming.
 
  • Like
Likes Alex Petrosyan and jedishrfu
  • #53
FactChecker said:
Good advice, but I think that you are being very conservative in your estimates. Using a low-level language to mimic what one can get in one hour of MATLAB programming could easily take weeks of programming.

That’s assuming you could get equivalent behaviour. Most Matlab functions are exceptionally smart and catch things like poor conditioning early. Besides, when was the last time Python segfaulted because you used a negative array index?
 
  • Like
Likes FactChecker
  • #54
I think for general benchmarks (e.g. Python vs Java) there are already good ball-park figures out there (e.g. https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html and https://www.techempower.com/benchmarks ), but for real-world applications it's not really worth talking about single-core execution or even single-machine execution.

So really it comes down to speed of iteration, concurrency, parallelism, and community. I personally would not reach for C/C++ as it does not pass the 'speed of iteration' test, or the community test for that matter. So in my humble opinion:

For small-scale applications, data science, and proof-of-concept work, Python 3 is the lingua franca.

For really, really large-scale applications with multiple distributed teams working together, deployed across 10K+ servers, there are really only two choices: Java, and, if you like to skate uphill and write most of your own libraries for everything, Golang. There is also Scala as a contender, but it has its own issues (as in: all software problems are people problems, and with Scala you'll get "implicit hell").
 
  • #55
Python, Java, Julia, whatever: You are all assuming that there exists a software "machine" that handles all the difficult parts for you. Some of us do not have that luxury - writing device drivers, interrupt handlers, process schedulers and so on. In that case your environment and requirements are radically different:
  • You are writing on "bare metal". No libraries are available to help with the difficult parts.
  • Usually your routines have to be short, fast and error-free. An Ethernet hardware driver is called millions of times each day - bugs are not tolerated.
  • Debugging the routines calls for very special equipment (you cannot insert debugging printouts, since the high-level printing routines are not available).
Here is an example of a small part of an interrupt driver for an Ethernet hardware chip:
Code:
/*m************************************************************************
***  FUNCTION: _ecInitInter
***************************************************************************
***  PURPOSE:  Sets up the interrupt structure for EC
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890711
***  LAST CHANGED BY: Svein Johannessen 900216
**************************************************************************/

#include "ec.h"
#include "sys/types.h"
#include "sys/mbuf.h"
#include "ecdldef.h"
#include "ecextrn.h"
#include "net/eh.h"

void (* _ecRx)() = NULL;
void (* _ecTx)() = NULL;
void (* _ecFatal)() = NULL;

short _ecRxRdy();
short _ecTxRdy();

void interrupt EC_INT();

u_char int_babl;                    /* babble */
u_char int_miss;                    /* missed packet */
u_char int_merr;                    /* memory error */
u_char int_rint;                    /* rx packet */
u_char int_tint;                    /* tx packet */
u_char int_idon;                    /* init done */

u_short _ecMERR;
u_short _ecLastCSR0;

EXPORT short _ecInitInter(eh_idone,eh_odone)
void (* eh_idone)();
void (* eh_odone)();
{

    _ecRx = eh_idone;
    _ecTx = eh_odone;
    _ecFatal= NULL;
    _ecMERR = 0;
    _ecLastCSR0 = 0;

    /* Here someone must set up the PC interrupt vector ... */
    if ( ( _ecRx == NULL ) || ( _ecTx == NULL ) )
         return ERROR;
    return NOERROR;
}

/*f************************************************************************
**  FUNCTION: _ecRxInt
***************************************************************************
***  PURPOSE:  Handles a receive interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890711
***  LAST CHANGED BY: Svein Johannessen 900216
**************************************************************************/

static void _ecRxInt()
{
    struct  mbuf *cur_buff;
    register short rxerr, good;

    /* see if the LANCE has received a packet  */
    rxerr = _ecRecPacket(&cur_buff);        /* get address of data buffer */

    if ( cur_buff != NULL ) {
      good = (rxerr==NOERROR) && !(int_miss || int_merr);
      (*_ecRx)(cur_buff,good);
      }
    else
         int_rint = 0;
    (void)_ecAllocBufs();         /* Allocate more buffers */
}
/*f************************************************************************
***  FUNCTION: _ecTxInt
***************************************************************************
***  PURPOSE:  Handles a transmit interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890712
***  LAST CHANGED BY: Svein Johannessen 900418
**************************************************************************/

void _ecTxInt()
{
    struct  mbuf *cur_buff;
    u_char  TxBad;
    short good, Coll;

    TxBad = _ecCheckTx(&cur_buff, &Coll);
    good = !(int_babl || int_merr || TxBad);
    if (cur_buff!=NULL)
      (*_ecTx)(cur_buff,good,Coll);
}

/*f************************************************************************
***  FUNCTION: _ecIntHandler
***************************************************************************
***  PURPOSE:  Handles an interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890712
***  LAST CHANGED BY: Svein Johannessen 900418
**************************************************************************/
/**
***  OTHER RELEVANT  :
***  INFORMATION     :
***
**************************************************************************/

extern short num_rx_buf;             /* wanted number of rx msg desc */
extern short cnt_rx_buf;             /* actual number of rx msg desc */

void _ecIntHandler()
{
    register u_short IntStat;
    register u_short ErrStat;

    IntStat = RD_CSR0;

    while (IntStat & INTF) {
      _ecLastCSR0 = IntStat;
      int_babl = ((IntStat & BABL)!=0);
      if ( int_babl )
           WR_CSR0( BABL);
      int_miss = ((IntStat & MISS)!=0);
      if ( int_miss )
           WR_CSR0( MISS);
      int_merr = ((IntStat & MERR)!=0);
      if ( int_merr )
      {
            _ecMERR++;
          WR_CSR0( MERR);
      }
      int_rint = ((IntStat & RINT)!=0);
      if ( int_rint )
        WR_CSR0( RINT);
      while ( int_rint ) {
        _ecRxInt();
        int_rint = _ecRxRdy();
        }
      int_tint = ((IntStat & TINT)!=0);
      if ( int_tint ) {
        WR_CSR0( TINT);
        _ecTxInt();
        }
      int_idon = ((IntStat & IDON)!=0);
      if ( int_idon )
           WR_CSR0( IDON);
      if ( int_miss && (cnt_rx_buf==0)) {
           _ecDoStatistic(FALSE,FALSE,int_miss,FALSE);
           (void)_ecAllocBufs();         /* Allocate more buffers */
      }
      if (_ecFatal!=NULL) {
        ErrStat = 0;
        if ((IntStat & TXON)==0)
          ErrStat |= EC_TXSTOPPED;
        if ((IntStat & RXON)==0)
          ErrStat |= EC_RXSTOPPED;
        if ( int_miss && (cnt_rx_buf!=0))
          ErrStat |= EC_SYNCERROR;
        if (ErrStat!=0)
          (*_ecFatal)(ErrStat);
        }
      IntStat = RD_CSR0;
      }
    WR_CSR0( (INEA | CERR));
}

/*f************************************************************************
***  FUNCTION: _ecInterrupt
***************************************************************************
***  PURPOSE:  Receives an interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890830
***  LAST CHANGED BY: Svein Johannessen 890830
**************************************************************************/

void interrupt _ecInterrupt()
{
    _ecIntHandler();
}

/* End Of File */
 
  • #56
Svein said:
Python, Java, Julia, whatever: You are all assuming that there exists a software "machine" that handles all the difficult parts for you.

Well, the thread title is "Comparison of high-level computer programming languages" (emphasis mine).
 
  • #57
cronxeh said:
but for real-world applications it's not really worth talking about single-core execution or even single-machine execution.
Your "real-world" is far different from my "real-world".
 
  • #58
FactChecker said:
Your "real-world" is far different from my "real-world".

Yes, but are they both equally imaginary?
 
  • #59
Vanadium 50 said:
Well, the thread title is "Comparison of high-level computer programming languages" (emphasis mine).
Yes, but what exactly does it mean?
  • High-level as in "more abstract than assembly language"?
  • High-level as in "will only run on a high-level computer (containing a mass storage device and a sophisticated operating system)"?
 
  • #60
Normally I'd say the first one, but the OP seems to want to compare efficiency of 3.5-4GL math suites, presumably ignoring 3GL offerings, or 2GL possibilities.
 
  • #61
I think there is a generational divide. I have always considered C/C++ to be "higher level", but that seems very out of date now. People can produce programs using MATLAB/Simulink or MatrixX/SystemBuild that would have been inconceivable long ago. And I am sure that others have similar experience with other 4th generation languages.

PS. I will never forget my reaction when MathCad gave us a language that automatically converted units and helped with dimensional analysis, but the aerospace industry turned to Ada, which enforced everything but helped with nothing. IMHO, that was a HUGE step backward from 4th generation languages.

PPS. I would still consider C/C++ to be an essential language for any professional programmer.
 
  • Like
Likes Klystron and S.G. Janssens
  • #62
FactChecker said:
I think there is a generational divide. I have always considered C/C++ to be "higher level", but that seems very out of date now.

I don't think it's out of date - at least not for modern C++ - but perhaps that just means that I am myself out of date.
 
  • #63
I'm rather more "out of date" than probably anybody else, but my (pro) experience is in COBOL (which really does deserve the levied humour, but also really does run rings around anything else in its domain). I tend to use 4GLs as analysis tools to get a grip on the problem, rather than as production languages.

On the other hand I worked with a rather more experienced (ie: older, with a previous generation methodology under his belt) programmer who could work wonders with a 4GL in a production environment... granted, by basically ignoring all the "fancy" stuff and concentrating on its capabilities closest to the metal.

Just wondering why I haven't seen any references to Fortran or Algol in the last four pages. Surely they both have OO, advanced libraries and decent graphics capabilities by now?
 
  • Like
Likes FactChecker
  • #64
I think that a targeted language like COBOL has great advantages over general purpose languages. I have some experience with several simulation languages for discrete event, continuous, and mixed models, statistical languages like R and SAS, scripting languages, etc. They are all better in their domain than general purpose languages.
FORTRAN has advantages for engineering analysis that I have never seen matched. I especially like the namelist capability for easily printing and reading large amounts of data in a readable format. Many programmers think that they can duplicate that capability in C or C++, but they never succeed.
 
  • Like
Likes Klystron
  • #65
AHAH. You cannot compare one piece of code with another if it is not optimized.
Try the ATLAS (Automatically Tuned Linear Algebra Software) library on Linux; hard to use, but so fast! People have been optimizing it to death for 30 years.

"The C++ code used simple Gauss-Jordan elimination taken from the book "Numerical Recipes in C" :
I think you can gain a factor of 10 with ATLAS. Maybe other languages like Julia or R use different algorithms, such as preconditioned conjugate gradient (ultra, ultra fast for sparse matrices), decomposition methods, etc.

For easy use, Eigen 2 is the best in C++.
 
  • #66
From experience with FORTRAN, C/C++, Pascal, Common Lisp, and Smalltalk, within object-oriented programming:

FactChecker said:
[post edited for brevity.]
FORTRAN has advantages for engineering analysis that I have never seen matched. I especially like the namelist capability for easily printing and reading large amounts of data in a readable format. Many programmers think that they can duplicate that capability in C or C++, but they never succeed.

FORTRAN proved an excellent language for real-time data collection, filtering, and storage, and for CFD and similar computational models. Good performance and internal operations. Largely intuitive flow control. Impressive ability to optimize interrupts and distribute processes. Mature compiler. Little or no language-derived internal data loss or corruption that I know of.

C/C++ runs like a different animal. When programming master control code where human lives are at risk, e.g., in a full-motion simulator; when high-level iterative code requires direct access to low-level synchronization pulses; when system-level code needs to flip bits and read/write registers: choose C++.

C++ operated well driving human-factors repetition rates around 30 frames/sec. With persistence of vision at ~7 frames, a 30 Hz frame rate provides over a 4x margin for visual and audio displays. Conditional operator (e.g., "if" statement) rates actually assist performance with I/O functions during frames. C++ easily controls other electronic devices.

Smalltalk and Common Lisp deserve honorable mention. They simplify the language into two kinds of objects: variables and functions. Although I rarely used these languages professionally, they taught me much about functions and manipulating objects, and led to ideas for adaptive structures such as sparse matrices, data-driven adaptations, and error-correcting code.
 
  • #67
kroni said:
AHAH. You cannot compare one piece of code with another if it is not optimized.
Try the ATLAS (Automatically Tuned Linear Algebra Software) library on Linux; hard to use, but so fast! People have been optimizing it to death for 30 years.

Using ATLAS is well worth the effort today if you do a lot of linear algebra (matrix multiplication, matrix inversion, eigenvalues, etc.). ATLAS generates a BLAS (Basic Linear Algebra Subprograms) library that is tuned for the exact machine you ran it on. ISA, cache sizes, CPU speed, memory speed, etc. are all taken into account. Then you can use BLAS directly, or use LAPACK or LINPACK, which wrap the BLAS routines in higher-level interfaces.
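As an illustration of what "using BLAS directly" looks like, here is a minimal C sketch calling the CBLAS interface that an ATLAS build provides. The matrix sizes and values are arbitrary examples, and the link flags shown in the comment (e.g. -lcblas -latlas) are the usual ones for an ATLAS installation, not a requirement of the code itself.

Code:
/* Minimal sketch: C = alpha*A*B + beta*C via the CBLAS dgemm routine
 * from an ATLAS-built BLAS.  Sizes and values are arbitrary examples.
 * Typical build: cc dgemm_demo.c -lcblas -latlas
 */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    /* 2x3 times 3x2 -> 2x2, all in row-major order. */
    double A[2*3] = { 1, 2, 3,
                      4, 5, 6 };
    double B[3*2] = { 7,  8,
                      9, 10,
                     11, 12 };
    double C[2*2] = { 0, 0,
                      0, 0 };

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,          /* M, N, K        */
                1.0, A, 3,        /* alpha, A, lda  */
                B, 2,             /* B, ldb         */
                0.0, C, 2);       /* beta, C, ldc   */

    printf("%6.1f %6.1f\n%6.1f %6.1f\n", C[0], C[1], C[2], C[3]);
    return 0;
}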

Back to another topic: if you are writing a program that you will run once, or a few times, high levels of optimization are a waste of time, even if a compiler is doing it for you. If you are writing code that will be run millions of times, or code that needs to meet real-time execution constraints, the more the compiler can do for you, the better. In particular, the SPARK language and toolset (not to be confused with Apache Spark) makes writing real-time code much easier. (Easier than other hard real-time tools, anyway.)

What makes hard real-time hard? The difference between your first-cut prototype and the requirements. Have I seen cases where that required a million-fold speed-up? Yep, and achieved it, too. Some of that speedup came from better algorithms and better optimization, but most of it came from taking the best algorithm and coding it to work with very sparse matrices.
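As an aside on that last point, the payoff from exploiting sparsity is easy to see in a sketch. The compressed-sparse-row (CSR) matrix-vector multiply below is a generic illustration in C (not the actual project code): it touches only the stored nonzeros, so the work scales with the number of nonzeros rather than with n^2.

Code:
/* Generic CSR (compressed sparse row) matrix-vector multiply:
 * y = A*x, touching only the stored nonzero entries.
 */
#include <stdio.h>

static void csr_matvec(int n, const int *row_ptr, const int *col_idx,
                       const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example with 4 nonzeros:
     *   [ 2 0 1 ]
     *   [ 0 3 0 ]
     *   [ 0 0 4 ]
     */
    int    row_ptr[] = { 0, 2, 3, 4 };
    int    col_idx[] = { 0, 2, 1, 2 };
    double val[]     = { 2, 1, 3, 4 };
    double x[]       = { 1, 1, 1 };
    double y[3];

    csr_matvec(3, row_ptr, col_idx, val, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]);   /* expect 3 3 4 */
    return 0;
}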
 
  • Like
Likes Klystron
  • #68
From personal experience, choosing the right language can give up to a several-percent speedup, if the language has a decent compiler and you know how to turn on all of its optimisations. Hell, even Java, which supposedly should run infinitely slower, runs within 5% of some of the C++ that I've written. If you want to optimise, do so for readability in your programming style/paradigm. Even though C++ has the crown for the highest-performance code, it can be outperformed by Rust, Go and OCaml, not because the languages are better, but because the paradigm that they enforce requires you to write easily optimisable code that you can also more easily understand.
 
  • Like
Likes Klystron
  • #69
Alex Petrosyan said:
From personal experience, choosing the right language can give up to a several-percent speedup, if the language has a decent compiler and you know how to turn on all of its optimisations. Hell, even Java, which supposedly should run infinitely slower, runs within 5% of some of the C++ that I've written. If you want to optimise, do so for readability in your programming style/paradigm. Even though C++ has the crown for the highest-performance code, it can be outperformed by Rust, Go and OCaml, not because the languages are better, but because the paradigm that they enforce requires you to write easily optimisable code that you can also more easily understand.

Design remains critical not only to optimization but also to operation. Active C++ code runs robustly with attention to error conditions.

C languages produce terse code. Write voluminous comments. The compiler strips out comments anyway.
 
  • #70
Alex Petrosyan said:
Even though C++ has the crown for the highest-performance code, it can be outperformed by Rust, Go and OCaml, not because the languages are better, but because the paradigm that they enforce requires you to write easily optimisable code that you can also more easily understand.
For numerical calculations (as in a lot of scientific/physics programs), FORTRAN is reputed to be the fastest. It is targeted at those types of programs and avoids tempting features that tend to slow calculations down. I have not personally run tests to confirm that reputation, but I have seen a few such comparisons.
 