Lex

Lex is a tool for generating lexical analyzers. It translates a set of regular-expression specifications into a C implementation of a corresponding finite state machine. This C program, when compiled, yields an executable lexical analyzer.
The source program is fed as input to the lexical analyzer, which scans it and produces a sequence of tokens as output.
Each token is specified by a token name. The token name is an abstract symbol representing the kind of lexical unit, e.g., a particular keyword, or a sequence of input characters denoting an identifier. The token names are the input symbols that the parser processes. 
Lex was developed by Mike Lesk and Eric Schmidt at Bell Labs.
```
"integer"  	{return ID_TYPE_INTEGER;}
```
This example demonstrates a rule in a Lex specification. The rule specifies that the lexical analyzer must return the token named ID_TYPE_INTEGER whenever the pattern "integer" is found in the input. A rule in a Lex program consists of a 'pattern' part (specified by a regular expression) and a corresponding (semantic) 'action' part (a sequence of C statements). In the example above, "integer" is the pattern and {return ID_TYPE_INTEGER;} is the corresponding action. The statements in the action part are executed whenever the pattern is matched in the input.
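Patterns need not be literal strings; they can be full regular expressions, and the action can inspect the matched lexeme through the Lex-provided variable yytext. As an illustrative sketch (the identifier pattern and printed message are assumptions, not part of the original text), a rule matching C-style identifiers might look like:

```
[a-zA-Z_][a-zA-Z0-9_]*	{printf("identifier: %s\n", yytext);}
```

Here the pattern matches one letter or underscore followed by any number of letters, digits, or underscores, and the action prints the matched text.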
<b>The structure of LEX programs</b>
A Lex program consists of three sections: Declarations, Rules, and Auxiliary functions.
```
DECLARATIONS
%%
RULES
%%
AUXILIARY FUNCTIONS
```
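To make this layout concrete, here is a minimal, illustrative Lex program using all three sections; it simply counts the words in its input (the counting behavior and variable names are assumptions chosen for the example, not part of the original text):

```
%{
/* Declarations section: C code here is copied verbatim
   into the generated lexer. */
#include <stdio.h>
int word_count = 0;
%}

%%
[a-zA-Z]+	{word_count++;}   /* a run of letters counts as one word */
.|\n		{;}               /* ignore everything else */
%%

/* Auxiliary functions section */
int yywrap(void) { return 1; }   /* signal end of input */

int main(void)
{
    yylex();                     /* run the generated analyzer */
    printf("words: %d\n", word_count);
    return 0;
}
```

The Declarations section holds C code (between %{ and %}) and definitions, the Rules section holds pattern/action pairs, and the Auxiliary functions section holds ordinary C functions, including main if the analyzer is to run standalone.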