Programming appeared long before the 1950s. The first ideas were expressed by Charles Babbage (1791-1871), who is rightfully considered the father of the computer. He knew nothing of transistors, microcircuits, or monitors, yet he described, accurately enough, the basic principles on which all computers would be built. The idea was developed by Countess Ada Lovelace (1815-1852). Her place in history is still the subject of much controversy, but one thing is absolutely certain: it was Ada who became the first known programmer. Thanks to her work, it became clear that the way to use machines efficiently is through algorithms described in code.
But programming could not have developed in isolation from computers. Without them it is just a mental game, an abstraction, however good the ideas. That is why, until the 1950s, programming languages were sets of machine instructions, often highly specialized and dying out along with their target devices.
The essence of the problem
Today you hardly need to know anything about computer architecture; for most programmers only the language matters, and everything else is secondary. In the 1950s everything was different: you had to work with raw machine codes, which is almost like programming with a soldering iron.
Another problem was that language development fell to the people who built the computers themselves: engineers first and foremost, and programmers only out of necessity. So they thought of a language as a sequence of operation numbers and memory cells. Roughly speaking, it looked like this:
01 xy – add the contents of memory cell y to cell x;
02 xy – the same procedure with subtraction.
As a result, the program code turned into an endless string of numbers:
01 10 15 02 11 29 01 10 11 …
Today this code looks dreadful, but in the early 1950s it was the norm.
Programmers had to spend a long time learning the machine instructions, then write the code painstakingly, and recheck it several more times once it was finished: the risk of error was high. The real trouble came when the development of machines began to be held back by a shortage of people able to write programs. An urgent solution was needed.
The solution lay on the surface: translate the numeric operation codes into letters. That is, instead of "01 10 15", write "ADD 10 15". This required an extra step of translating the symbols into machine commands, but given the scale of the problem, the sacrifice was minimal.
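This translation step can be sketched in a few lines of Python; the two-entry opcode table mirrors the hypothetical example above and is not the instruction set of any real machine:

```python
# A minimal sketch of translating symbolic mnemonics back into
# numeric machine codes, as early assemblers did.
# The opcode table is hypothetical, mirroring the example above.
OPCODES = {"ADD": "01", "SUB": "02"}

def assemble(line):
    """Turn 'ADD 10 15' into '01 10 15'."""
    mnemonic, *operands = line.split()
    return " ".join([OPCODES[mnemonic], *operands])

print(assemble("ADD 10 15"))  # -> 01 10 15
```

A real assembler also had to resolve symbolic names for memory cells via a correspondence table, but the core idea is this same lookup-and-substitute step.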
The solution turned out to be so obvious that it is not known for certain who first invented assembly language. Most likely, it appeared in several places at once. Wilkes, Wheeler, and Gill, the authors of the book "The Preparation of Programs for an Electronic Digital Computer", are credited with the name and its popularization. It is easy to guess that the name Assembler comes from the English word assemble, which describes the process quite accurately. Later, symbolic names came to cover not only the simplest operations but also addressing, which greatly improved the readability of code.
Now this seems like an elementary solution, but at the time the implementation was a complex process that required building correspondence tables and assigning a designation to each memory cell. This led to three fundamental things:
- The emergence of the concept of a symbolic variable or just a variable.
- The creation of lookup tables mapping symbols to operations and memory cells.
- Understanding that programming can be an art.
This was the catalyst for a language breakthrough.
Compilers and biases
The assembler made simple transformations possible, for example translating 01 into ADD. The macro assembler expanded on this idea and let programmers collapse several instructions into one. For example, if a program constantly added a value to a memory location and checked whether it was full, you could wrap all of this into an INCRT macro and reuse it, changing only the variables. In effect, macro assemblers became the first high-level languages.
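Macro expansion can be sketched as simple template substitution; the INCRT body and the mnemonics here are hypothetical, chosen only to match the description above:

```python
# Sketch of macro expansion: a macro body is a template of
# instructions; invoking the macro substitutes the arguments.
# The INCRT macro and its mnemonics are hypothetical.
MACROS = {
    "INCRT": [
        "ADD {cell} {amount}",   # add a value to a memory cell
        "CMP {cell} {limit}",    # compare against a limit
        "JGE {overflow}",        # jump if the cell is "full"
    ]
}

def expand(name, **args):
    """Expand a macro into its instruction sequence."""
    return [line.format(**args) for line in MACROS[name]]

for instr in expand("INCRT", cell="10", amount="1",
                    limit="100", overflow="HANDLE_FULL"):
    print(instr)
```

Each invocation expands into the same three instructions with different operands, which is exactly the repetitive copying the compiler later eliminated.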
But this approach had an important drawback: every time, before writing the code, the basic operations had to be folded into macros by hand. A tool was needed that would free programmers from constant copying. This is how the compiler appeared.
Now we know that, thanks to the compiler, we can create a programming language with practically any syntax; all that matters is that it correctly translates our code into machine instructions. At the time, however, experts were skeptical about high-level languages. Part of the reason was computer performance: simplifying the syntax at the cost of complex transformations was expensive and could set progress back several years. Part of the reason was emotional: it was hard to move away from the form of machine instructions and give up control over the process. Programmers were seriously afraid that after compilation they would no longer be able to understand the commands being executed. Today hardly anyone cares what the machine code looks like, but in those days it seemed like a serious problem.
Nevertheless, the compiler was the only way out of the situation, but here another difficulty appeared: arithmetic expressions. They are not executed in the order the machine reads the code. From school we know the order of operations in the expression "2 + 3 * 5", but the machine reads the code in one direction, so the answer would be wrong. Yes, this particular example could be handled with a macro, but complex expressions like "(2 + 3 * 5 + 4 / 6) * 10 + 16 - (14 + 15) * 8" required a fundamentally different approach.
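A stack-based evaluator in the spirit of what those early compilers needed can be sketched in Python. This is a simplified two-stack scheme respecting precedence and parentheses, not Backus's actual algorithm:

```python
# Sketch of stack-based expression evaluation: respect operator
# precedence and parentheses instead of reading strictly
# left to right, as a naive machine-style scan would.
import re

PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def _apply(ops, vals):
    """Pop one operator and two values, push the result."""
    op = ops.pop()
    b, a = vals.pop(), vals.pop()
    vals.append({"+": a + b, "-": a - b,
                 "*": a * b, "/": a / b}[op])

def evaluate(expr):
    ops, vals = [], []
    for tok in re.findall(r"\d+|[()+\-*/]", expr):
        if tok.isdigit():
            vals.append(float(tok))
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                _apply(ops, vals)
            ops.pop()  # discard the "("
        else:  # operator: first apply anything of >= precedence
            while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                _apply(ops, vals)
            ops.append(tok)
    while ops:
        _apply(ops, vals)
    return vals[0]

print(evaluate("2 + 3 * 5"))  # -> 17.0
```

Read left to right, "2 + 3 * 5" would give 25; the operator stack defers the addition until the higher-precedence multiplication has been applied.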
The era of a new formation
John Backus, the creator of Fortran, managed to devise a stack-based parsing algorithm. He started working on it in 1954, and it took him almost five years to prove high-level languages' right to exist. Fortran's full name is The IBM Formula Translating System, or FORmula TRANslator. Despite being over 60 years old, it remains one of the most popular programming languages and is still in great demand in Data Science. Over that time we have seen many versions: Fortran I, II, 66, 77, 90, 95, 2008, with another due next year (Fortran 2015 was planned, but because of delays the name may change to 2018). It was in Fortran that many attributes of a high-level language were implemented for the first time, including:
- arithmetic and logical expressions;
- the DO loop (an early form of the FOR loop);
- the conditional IF statement.
Another important piece of Fortran's legacy, which modern programmers are often unaware of, is its naming restriction for integers: integer variables had to start with one of the six letters I, J, K, L, M, N (I through N, as in INteger). This is where the habit of using i, j, and so on as loop counters comes from.
At the same time, Fortran remained a language close to the machine. For example, it had a construct like this:
if (expression) doneg, dozero, dopos
The reason was the architecture of the IBM computer, whose branch instruction dispatched on the sign of a value: negative, zero, or positive. The closeness to the machine also showed in the famous GOTO command (later inherited by Basic), which meant a direct jump to one command or another.
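The semantics of that three-way "arithmetic IF" can be sketched in Python, using the three labels from the example as callbacks (the function name and signature are illustrative, not Fortran's):

```python
# The arithmetic IF picks one of three branches from the sign of
# an expression, mirroring the machine's sign test.
def arithmetic_if(value, doneg, dozero, dopos):
    """Dispatch on the sign of value, as IF (expr) L1, L2, L3 did."""
    if value < 0:
        return doneg()
    if value == 0:
        return dozero()
    return dopos()

result = arithmetic_if(3 - 5,
                       doneg=lambda: "negative",
                       dozero=lambda: "zero",
                       dopos=lambda: "positive")
print(result)  # -> negative
```

In original Fortran the three targets were statement labels rather than callbacks, but the sign-based dispatch is the same.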
Returning to the problem of arithmetic expressions: the exhaustive stack algorithm (that is, rescanning the entire string) was not an efficient solution, but it proved how simple and logical the implementation could be.
Languages for everyone
Fortran I was a scientific language built on operations with complex numbers and floating point. It could not even process text; to do so, text had to be converted into special codes. Fortran was therefore unsuitable for business, for which the Cobol language was specially created.
Its syntax is fundamentally different, as close as possible to natural English. There was practically no arithmetic, only statements like:
Move Income To Total
Subtract Expenses From Total
Cobol became the embodiment of the furthest departure from the old machine-arithmetic mindset toward a universal one. And most importantly, it was now possible to work with text and records.
The next fundamental language was Algol (ALGOrithmic Language), intended for scientific reports and publications. Features we now take for granted appeared in it for the first time:
- the distinction between assignment (:=) and logical equality (=);
- using a for loop with three arguments: initial value, limit, step;
- the block structure of programs, enclosed between begin and end, which removed much of the need for GOTO.
It was from Algol that C, C++, C#, Java, and many other popular languages descended.
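Algol's three-argument for loop survives almost unchanged in its descendants. Python's range, for instance, takes the same triple of initial value, limit, and step, with the one difference that the stop value is excluded:

```python
# Algol-style "for i := 1 step 2 until 9" enumerates 1, 3, 5, 7, 9.
# range(start, stop, step) expresses the same triple; since the stop
# value is excluded in Python, we pass one past the intended limit.
values = list(range(1, 10, 2))
print(values)  # -> [1, 3, 5, 7, 9]
```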
The fourth whale of the 1950s was Lisp (LISt Processing language), designed specifically to serve artificial intelligence. Its main feature is working not with imperative statements but with functions. To make this possible, John McCarthy had to provide a number of supporting mechanisms: dynamic typing, automatic memory allocation, and garbage collection. Ultimately, it was Lisp that became the progenitor of languages such as Python and Ruby, and it is still actively used in AI today.
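The function-centric style Lisp introduced can be sketched in Python, which inherited first-class functions, dynamic typing, and garbage collection from that line of work. The mapcar here is a toy reimplementation of the classic Lisp operator, not Lisp itself:

```python
# Lisp programs are built from functions operating on lists.
# MAPCAR applies a function to every element of a list; this toy
# version does it recursively, in the Lisp idiom of head and tail.
def mapcar(fn, lst):
    if not lst:
        return []
    return [fn(lst[0])] + mapcar(fn, lst[1:])

print(mapcar(lambda x: x * x, [1, 2, 3, 4]))  # -> [1, 4, 9, 16]
```

Note that fn is passed around as an ordinary value: this treatment of functions as data is the core idea Lisp contributed.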
Thus, the 1950s changed the mindset of programmers, gave the world four fundamental languages, and set it on the rails of the computer revolution.