[clean-list] "code is data" and Clean

Mark Phillips marrieda@lawyer.com
Tue, 04 Mar 2003 00:07:44 -0500


Hi Marco,

Thanks for your thoughts on the questions I raised.  I am trying
to get a better grasp of what is and is not important in a
programming language, and this discussion is very useful.

> I think that some Clean proponents would argue that code manipulation 
> is "not done". They would consider the need to do such a thing to be 
> a weakness in a programming language like Clean. Not to say that LISP 
> is "weak". Clean and LISP are strong in different areas, just like 
> Perl and C are. Or Java and C++ for that matter.  Things to consider:

Lisp proponents argue that the ability to manipulate code is a
strength.  They say it means one is not limited to the programming
ideas that were in vogue when the language was invented.

Yes, I have heard the argument "there is no 'best' language, just
different languages with different strengths".  There is an element
of truth to this, yet I do believe that some languages are generally
better than others.  However, I have not yet come to firm
conclusions about which constructs are most important in a
language.

> - There is little need to rewrite code in order to arrive at powerful 
> functions. You can write powerful functions directly in Clean. The 
> fact that this is highly appreciated in LISP does rather (in my 
> humble opinion) demonstrate some sort of weakness in LISP's simple 
> syntax.

Perhaps you are right.  But Lisp proponents seem to argue that you
are bound to come across a problem which Clean doesn't handle at
all well - and then you are stuck: you pay a high price.  Lisp
people argue that S-expressions and macros mean you avoid paying
this high price.  Sometimes you pay a low price instead, e.g. when
something could be expressed more cleanly in Clean - but even that
price can be kept small through the use of macros.
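
To make that concrete for myself, here is a rough sketch (in
Haskell, which I gather is close to Clean - the names and the
example are mine, purely for illustration) of treating code as an
explicit data type and rewriting it with ordinary functions, which
is roughly what a Lisp macro does over S-expressions:

    data Expr = Lit Int
              | Add Expr Expr
              | Mul Expr Expr

    -- A "macro-like" rewrite, written as an ordinary function over
    -- the expression type: fold constants wherever possible.
    simplify :: Expr -> Expr
    simplify (Add a b) = case (simplify a, simplify b) of
                           (Lit x, Lit y) -> Lit (x + y)
                           (a', b')       -> Add a' b'
    simplify (Mul a b) = case (simplify a, simplify b) of
                           (Lit x, Lit y) -> Lit (x * y)
                           (a', b')       -> Mul a' b'
    simplify e         = e

    -- Evaluating the same data, as in a small interpreter.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

The difference, as I understand it, is that in Lisp the program's
own syntax already *is* such a data type, so this kind of rewriting
applies to any code at all, not just to expressions one chose to
represent explicitly.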

> - Clean does not interpret expressions. It compiles them. And Clean 
> is a strongly typed language. This means that many errors are caught 
> at compile-time. In this respect Clean's design philosophy is 
> fundamentally different from LISP's. 

I like the idea of strong typing.  But why not add strong typing to
Lisp?  Why not add the advantages of Clean to a Lisp-like language?
Perhaps there are good reasons - I just don't know them yet.
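
To check that I understand the contrast, a trivial made-up example:
in a strongly typed language such as Clean or Haskell, a misapplied
function never reaches run time, whereas in Lisp the same mistake
would typically only surface when the code is executed.  In Haskell:

    double :: Int -> Int
    double x = x * 2

    -- double "three"   -- rejected by the compiler: a String is not an Int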

> 
> - Clean has no standard "eval"-like function that interprets and 
> evaluates some piece of data. So it is not _merely_ a matter of 
> matching function syntax and data syntax. One could, however, easily 
> write an interpreter for any LISP-like (or Clean-like) language in 
> Clean as well (but also see the next points, before you argue that 
> this is not quite what LISP offers).
> 
> - Rewriting expressions for the sake of efficiency is rather 
> considered a task of the compiler. It is true that some of this is 
> still somewhat a promise. I think that Clean (as Haskell) could do 
> with better forms of compile-time evaluation of expressions (but that 
> is being worked on as far as I know). Ultimately, optimising an 
> interpreter for a given input program would deliver a compiled 
> version of that program. Whether that goal is close is an entirely 
> different matter.
> 
> - I think that Clean would come closer to LISP if it started to 
> incorporate some form of runtime (re)compilation of expressions. This 
> seems somewhat related to the current efforts that are put in Dynamic 
> types (also in the sense that the runtime infrastructure is getting 
> into place). But note that just as the use of Dynamic types may 
> introduce runtime typing errors, the use of runtime compilation would 
> introduce runtime compiler errors, which seems somewhat opposite to 
> catching errors as early as possible.

I am curious to know more about what Clean's Dynamics really are
and what they actually do.  How close do they come to Lisp's dynamic
nature?
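
I imagine they are roughly analogous to Haskell's Data.Dynamic,
where a value is wrapped together with a representation of its type
and unwrapping it re-checks that type at run time - though I don't
know how faithful that analogy is.  A small sketch of the Haskell
version:

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- Wrap a value together with a runtime representation of its type.
    answer :: Dynamic
    answer = toDyn (42 :: Int)

    -- Unwrapping checks the type at run time: a wrong guess gives
    -- Nothing instead of a compile-time error.
    asInt :: Maybe Int
    asInt = fromDynamic answer        -- Just 42

    asString :: Maybe String
    asString = fromDynamic answer     -- Nothing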

I agree that catching errors early is good... unless that
requirement prevents you from programming more powerfully.  Why not
catch errors at compile time where possible, and fall back to
runtime checks elsewhere?

> Clean uses graph rewriting internally for its implementation, and its 
> semantics are defined in terms of graph-rewriting. Clean's syntax is 
> not. So Clean rewrite expressions aren't graphs. They are rules for 
> manipulating them. And as such, they cannot manipulate themselves.

I wonder whether it would have been possible for Clean to be
designed with the graphs made explicit - and if so, why its
designers chose not to.
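
To make my question a little more concrete: I imagine "explicit
graphs" meaning something like the following (a purely hypothetical
sketch in Haskell, with names I have invented), where shared
subterms are visible as ordinary data and could in principle be
manipulated by the program itself:

    import qualified Data.Map as Map

    type NodeId = Int

    data Node = Leaf Int
              | Plus NodeId NodeId     -- children referred to by name
      deriving Show

    type Graph = Map.Map NodeId Node

    -- (1 + 2) + (1 + 2), with the shared subterm stored only once.
    example :: Graph
    example = Map.fromList
      [ (0, Leaf 1)
      , (1, Leaf 2)
      , (2, Plus 0 1)
      , (3, Plus 2 2) ]   -- both arguments point at node 2

Perhaps keeping the graphs implicit keeps the rewrite-rule syntax
simpler - maybe that is part of the reason.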

Cheers,

Mark.
