uniqueness and strictness unified?

Sjaak Smetsers sjakie@cs.kun.nl
Wed, 11 Mar 1998 12:28:38 +0100


>I have been working with Clean and its uniqueness for some years
>and I have always found the uniqueness checker more restrictive
>than I thought it should be. For the purposes of uniqueness, IO
>and destructive updates, I reasoned that it should not be a problem
>to have multiple read-only references to the object, as long as there
>is at most one destructive reference. The code generator should then make
>sure that all read-only references have vanished (resolved) before
>the destructive update takes place. Not so with the actual uniqueness
>checker.

This idea is incorporated in the latest Clean version. By using the strict
let construct you can indicate 'observing accesses'. These observing
accesses can be combined with a single destructive access performed after
the strict let is evaluated. For further details, see the reference manual.
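
Roughly, the pattern looks like this (only a sketch with a made-up function;
the exact notation is the strict let-before '#!' described in the manual):

    copyElem :: Int Int *{#Int} -> *{#Int}
    copyElem i j arr
        #! x = arr.[i]      // observing (read-only) access, evaluated first
        = {arr & [j] = x}   // the single destructive update happens afterwards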

>If a function is strict in an argument, we annotate that and then
>the compiler knows that "this function surely needs this argument".

>In this spirit, why should a function that may destroy an argument
>(or change it in place) not be annotated as such, so that the compiler
>knows that "this function may destroy this argument". We could say
>that the function has a 'destructive reference' to that particular argument.
>
>A program analysis along the lines of strictness-analysis might then
>check whether there are any objects to which there are multiple
>destructive references. If so, the program is rejected. We could
>call this 'destruction analysis'.
>
>The main difference with present day uniqueness is that destructiveness
>is a property of a function's argument, not of an object, just like
>strictness.

Uniqueness is NOT a property of objects but of a function's arguments. So I
do not see the difference between your idea and uniqueness typing.
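
Just as with strictness, the annotation is attached to an argument position in
the function's type. A made-up example (sketch only):

    // ! = strict in this argument, * = unique, so it may be updated in place;
    // both annotations belong to the function's arguments, not to objects
    setElem :: !Int !Int *{#Int} -> *{#Int}
    setElem i x arr = {arr & [i] = x}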

Regards

Sjaak Smetsers