Interpreters present an interface to the user such that it appears that the computer is executing the SourceCode. With the exception of MachineLanguage, this can't actually be done, so instead it's done with smoke and mirrors. The two common approaches are incremental compilation (either to an intermediate form or to MachineLanguage) and writing a VirtualMachine which executes the SourceCode (or an intermediate form of it) one instruction at a time.
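The second approach can be sketched in a few lines. This is a minimal illustration, not any particular implementation: a tiny stack-based VirtualMachine with a made-up instruction set (PUSH/ADD/MUL), dispatching one instruction at a time.

```python
def run(program):
    """Execute a list of (opcode, args...) tuples on a stack machine."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown instruction: " + op)
    return stack

# (2 + 3) * 4
result = run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)])
# result == [20]
```

Note that nothing here is ever turned into MachineLanguage; the host language's own compiled code does the work on the program's behalf, which is the smoke and mirrors referred to above.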

BASIC, Scheme and Haskell are examples of languages that are commonly interpreted.

The line is blurry at best, though.

By the above definition, Java is also usually interpreted, yet because its VirtualMachine's intermediate MachineCode is stored in class files, people regard it as a compiled language. Perl, on the other hand, is regarded as interpreted, even though it compiles the source to an optree, simply because it does not save this optree to disk. Even so, environments do exist (such as Apache mod_perl) which cache the compiled optree for reuse.
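Python makes the same point observable from within the language itself: its "interpreter" first compiles SourceCode to bytecode for a VirtualMachine (and, like Java's class files, caches that bytecode on disk as .pyc files). The built-in compile and exec functions expose the two stages separately:

```python
source = "x = 6 * 7"

# Stage 1: compile the SourceCode to a code object holding VM bytecode.
code = compile(source, "<example>", "exec")

# Stage 2: the VirtualMachine executes that bytecode.
namespace = {}
exec(code, namespace)

# namespace["x"] == 42; code.co_code is the raw bytecode,
# roughly analogous to the body of a Java class file.
```

So whether Python "is compiled" depends entirely on which of these two stages you choose to emphasize.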

On the other hand, a lot of modern CPUs do not implement all of their MachineLanguage in hardware - rather, they contain MicroCode for some of their more complex instructions, which is interpreted and executed as a series of "real" operations.

For all intents and purposes, attempting to make a clear distinction between Compilers and Interpreters is impossible. It is therefore also silly to talk of interpreted or compiled ProgrammingLanguages. Any language could be interpreted or compiled, whether a Compiler or interpreter exists or not.