[clean-list] Simplified uniqueness typing
Marco Kesseler
m.kesseler at xs4all.nl
Thu Nov 20 23:33:49 MET 2008
Hi Erik,
On Nov 20, 2008, at 2:35 PM, <zuurb078 at planet.nl> wrote:
>
> Clean already has modest ways to split many unique objects off of
> one 'mother' object and recombine them at the end: You can open
> many unique files based on one unique file system. This is easily
> implemented because there is only one filesystem: whenever you close
> a unique file, you can be sure that it is closed into the right
> filesystem.
>
> But this is not a general feature that you could for instance apply
> to a unique array. A difficulty is that when 'closing' a part of the
> unique array split off earlier, you should close it into the same
> mother-array. There is no static test the compiler can apply to
> ensure this. Also, in case of an array, when you split off unique
> sub-arrays, they should be non overlapping. The compiler cannot test
> this in general.
The compiler does not need to test this. It just needs to trust a
standard construct that does the splitting and recombining for you.
Like, say, a parallel map.
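To make that concrete, here is a sketch in Rust rather than Clean (my analogy, not Clean code): ownership plays the role of uniqueness, and the trusted construct is the library function `split_at_mut`, whose signature guarantees the two halves are disjoint. The compiler never proves disjointness itself; it trusts that one construct.

```rust
use std::thread;

// A trusted construct splits a uniquely-owned array into disjoint
// halves, maps over them in parallel, and recombines them when the
// scope ends. The compiler trusts `split_at_mut` to hand out
// non-overlapping parts; it does not verify this itself.
fn parallel_map(data: &mut [i32], f: fn(i32) -> i32) {
    let mid = data.len() / 2;
    let (left, right) = data.split_at_mut(mid); // disjoint by construction
    thread::scope(|s| {
        s.spawn(|| left.iter_mut().for_each(|x| *x = f(*x)));
        s.spawn(|| right.iter_mut().for_each(|x| *x = f(*x)));
    }); // both borrows end here; `data` is whole again
}

fn main() {
    let mut a = [1, 2, 3, 4];
    parallel_map(&mut a, |x| x * 10);
    println!("{:?}", a); // [10, 20, 30, 40]
}
```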
I know nothing about simplified uniqueness typing, but here are a few
thoughts w.r.t. parallelism (and hopefully still right):
* First of all, using the file system as an example distracts a bit
from the issues in parallel processing. It is the real-world
requirements that dictate that a file should be recombined (closed) in
the file system that it came from. If we let go of these worldly
concerns, things are different. I can remove a unique element from a
unique array A and put it in another (unique) array B without ever
recombining it into A again.
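In Rust terms (again an analogy I'm adding, not Clean syntax), move semantics let an element leave one container for good:

```rust
// A uniquely-owned element leaves array A and is "closed into" array B
// instead; it never returns to its source.
fn move_element(mut a: Vec<String>, mut b: Vec<String>, idx: usize) -> (Vec<String>, Vec<String>) {
    let elem = a.remove(idx); // take sole ownership out of `a`
    b.push(elem);             // hand it to `b`; `a` never sees it again
    (a, b)
}

fn main() {
    let a = vec!["x".to_string(), "y".to_string(), "z".to_string()];
    let (a, b) = move_element(a, Vec::new(), 1);
    println!("{:?} {:?}", a, b); // ["x", "z"] ["y"]
}
```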
* There is no need to always create unique objects out of other unique
things, nor do unique objects that derive from the same source need to
be "disjoint", nor do they always need to be temporarily "removed"
from their source. It is perfectly possible to create a number of
unique copies of some array (unique or not) and hand these out to a
number of parallel processes.
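A sketch of that point, still using Rust ownership as a stand-in for uniqueness: each worker simply receives its own independently owned copy of the source, so no splitting or recombining is involved at all.

```rust
use std::thread;

// Each worker gets a fresh, independently owned copy of the input;
// the copies derive from the same source but are free to diverge.
fn fan_out(source: &[i32], workers: i32) -> Vec<Vec<i32>> {
    let handles: Vec<_> = (0..workers)
        .map(|i| {
            let mut copy = source.to_vec(); // independent unique copy
            thread::spawn(move || {
                copy.iter_mut().for_each(|x| *x += i); // mutate freely
                copy
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    println!("{:?}", fan_out(&[1, 2, 3], 3)); // [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
}
```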
* It is also possible to have a number of parallel processes
simultaneously construct (disjoint parts of) some unique structure.
The only thing one needs to ensure is that the single consumer of
this unique structure cannot access it while it is being constructed,
or at least that it cannot access the parts being constructed until
they are finished.
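The point above can also be sketched in Rust (my analogy): several workers fill disjoint regions of one buffer, and the borrow scope guarantees the consumer cannot observe the structure until construction has finished.

```rust
use std::thread;

// Workers fill disjoint chunks of one uniquely-owned buffer in
// parallel; only after the scope ends may anyone read the result.
fn build_in_parallel(len: usize, chunk: usize) -> Vec<u32> {
    let mut buf = vec![0u32; len];
    thread::scope(|s| {
        for (i, part) in buf.chunks_mut(chunk).enumerate() {
            s.spawn(move || {
                for (j, slot) in part.iter_mut().enumerate() {
                    *slot = (i * chunk + j) as u32; // disjoint writes only
                }
            });
        }
    }); // all builders are done here
    buf // only now is the finished structure handed to the consumer
}

fn main() {
    println!("{:?}", build_in_parallel(8, 4)); // [0, 1, 2, 3, 4, 5, 6, 7]
}
```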
In the end, I expect that we will find that uniqueness typing does not
really hinder parallel processing, but also that it does not play an
important role in enabling/promoting it.
best regards,
Marco
>
> From: clean-list-bounces at science.ru.nl on behalf of Jari-Matti Mäkelä
> Sent: Thu 20-11-2008 12:12
> To: clean-list at science.ru.nl
> Subject: [clean-list] Simplified uniqueness typing
>
> Hi
>
> Are there any plans to adopt the simplified uniqueness typing system
> (http://www.cs.tcd.ie/~devriese/pub/ifl07-paper.pdf) in the coming
> versions of Clean?
>
> Another question: if Clean continues to support parallel/concurrent
> programming some day, does uniqueness typing support dividing a
> massive computational task across several threads and then combining
> the results, or does it disallow this? Are there alternative
> approaches to this problem?
>
> _______________________________________________
> clean-list mailing list
> clean-list at science.ru.nl
> http://mailman.science.ru.nl/mailman/listinfo/clean-list
>