uniqueness and strictness unified?

Erik Zuurbier F.S.A.Zuurbier@inter.nl.net
Sun, 8 Mar 1998 15:23:19 +0100


I have been working with Clean and its uniqueness typing for some
years, and I have always found the uniqueness checker more restrictive
than I think it needs to be. For the purposes of uniqueness, I/O
and destructive updates, I reasoned that it should not be a problem
to have multiple read-only references to an object, as long as there
is at most one destructive reference. The code generator would then
make sure that all read-only references have been resolved before
the destructive update takes place. The actual uniqueness checker
does not work that way.
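
Roughly, I have definitions like the following in mind (a sketch in
Clean-like syntax from memory, with 'size' and 'update' taken to be
the StdArray functions):

    f :: *{#Int} -> *{#Int}
    f a
        # n = size a       // read-only reference to a
        = update a 0 n     // destructive reference to a: rejected,
                           // because a is no longer unique here

Today one has to thread the array even through the read-only access
(e.g. with something like usize) to keep it unique.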

Today I came up with an idea so simple that I cannot imagine that it
has not been pursued before.

If a function is strict in an argument, we annotate it as such, and
the compiler then knows that "this function surely needs this
argument".
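
In Clean that is written, for instance, like this:

    add :: !Int !Int -> Int
    add x y = x + y     // add surely needs both x and y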

In this spirit, why should a function that may destroy an argument
(or change it in place) not be annotated as such, so that the
compiler knows that "this function may destroy this argument"? We
could say that the function has a 'destructive reference' to that
particular argument.
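
For concreteness, a hypothetical notation (re-using the asterisk, as
I suggest further below) could read:

    writeAt :: *{#Int} !Int !Int -> {#Int}
    writeAt a i x = {a & [i] = x}   // writeAt holds a destructive
                                    // reference to a; i and x are
                                    // only read

The asterisk would then say something about writeAt's use of its
first argument, not about the array itself.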

A program analysis along the lines of strictness-analysis might then
check whether there are any objects to which there are multiple
destructive references. If so, the program is rejected. We could
call this 'destruction analysis'.
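
Destruction analysis would then have to reject a definition like the
one below (writeAt as in the hypothetical sketch above), because it
creates two destructive references to the same array:

    g :: *{#Int} -> ({#Int}, {#Int})
    g a = (writeAt a 0 1, writeAt a 1 2)    // two destructive
                                            // references to a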

The main difference with present-day uniqueness typing is that
destructiveness is a property of a function's argument, not of an
object, just like strictness is.

I understand that the implications of strictness annotations and
destruction annotations are quite different: an extra strictness
annotation (the exclamation mark) will never cause the compiler to
complain and may only change the run-time behaviour. An extra
destruction annotation (we could re-use the asterisk for that) could
naturally cause a compilation error.

The only problem that I can imagine (paranoid as I am) is that
detecting multiple destructive references to an object is
undecidable in general, and that there is no practical, safe
approximation of such a detection.

So: has this line of thought ever been explored?

Regards Erik Zuurbier