On the semantics of programming languages
by Michał Grabowski
Complex constructions of a high-level programming language are usually explained in terms of a translator (an interpreting program) for the given language. For instance, riddles about which variable a procedure actually refers to are often explained in terms of activation record stacks, static links, procedure code addresses and so on. You might ask then: "What's the good of having a high-level language if you have to get down to a much lower level in order to understand it well?" What's more, it quickly turned out that, say, the PASCAL accepted by one translator is not exactly the same PASCAL as that accepted by another. But then, what is PASCAL, in fact? For example, do declarations of variables have any context-independent meaning, as "things in themselves", or is their meaning provided only by the instructions that follow them?
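Such riddles typically hinge on scoping. A minimal sketch in Python (a hypothetical illustration, not taken from this article) shows the kind of question a translator must settle: which x does f see when it is called from inside g?

```python
x = 1

def f():
    # Under lexical (static) scoping, this name refers to the global x.
    return x

def g():
    x = 2  # A local x; under dynamic scoping, f would see this one instead.
    return f()

result = g()
# Python is lexically scoped, so result == 1.
# A dynamically scoped language would yield 2 here.
```

The point of the riddle is that the answer depends on the scoping discipline of the language, which is exactly what explanations via static links and activation records try to pin down at the machine level.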
By the end of the 1960s, the creation of computer-independent methods for defining the meaning of programming-language constructions was considered a major intellectual challenge. The first satisfactory solution, denotational semantics, came from a group of mathematicians and computer scientists at Oxford University in the 1970s.
Is there a formal deduction system that would allow one to prove, without reference to any translator, that a program computes a given function? Yes, such formal systems do exist. Proving their consistency is a very hard task, one that leads to very subtle semantics of programming languages and to considerations about the types of objects. Even very abstract versions of the problems that arise are so complex that they require tools drawn from domain theory, the λ-calculus, intuitionistic logic, category theory, game theory and computational complexity theory.
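To give a flavour of what such a deduction system reasons about, here is a small Python sketch (an illustration of the Hoare-logic style of annotation, not the formal system itself): a loop annotated with a precondition, a loop invariant, and a postcondition, which are exactly the assertions such a system would let one prove.

```python
def factorial(n):
    """Compute n!, with the correctness argument written as annotations."""
    # Precondition: n >= 0
    assert n >= 0
    result, i = 1, 0
    # Loop invariant: result == i!  (true on entry, since 1 == 0!)
    while i < n:
        i += 1
        result *= i
        # Invariant restored after each iteration: result == i!
    # On exit i == n, so the invariant yields the postcondition: result == n!
    return result
```

A Hoare-style system certifies the claim "if n >= 0 on entry, then the result is n! on exit" by checking precisely these annotations against the program text, with no mention of activation records or any particular translator.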